00:00:00.001 Started by upstream project "autotest-per-patch" build number 132712
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.029 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:07.644 The recommended git tool is: git
00:00:07.644 using credential 00000000-0000-0000-0000-000000000002
00:00:07.647 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:07.657 Fetching changes from the remote Git repository
00:00:07.663 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:07.673 Using shallow fetch with depth 1
00:00:07.673 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:07.673 > git --version # timeout=10
00:00:07.684 > git --version # 'git version 2.39.2'
00:00:07.684 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:07.695 Setting http proxy: proxy-dmz.intel.com:911
00:00:07.695 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:12.500 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:12.511 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:12.522 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:12.522 > git config core.sparsecheckout # timeout=10
00:00:12.536 > git read-tree -mu HEAD # timeout=10
00:00:12.552 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:12.572 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:12.572 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:12.666 [Pipeline] Start of Pipeline
00:00:12.680 [Pipeline] library
00:00:12.689 Loading library shm_lib@master
00:00:12.689 Library shm_lib@master is cached. Copying from home.
00:00:12.703 [Pipeline] node
00:00:12.710 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest_3
00:00:12.711 [Pipeline] {
00:00:12.717 [Pipeline] catchError
00:00:12.719 [Pipeline] {
00:00:12.727 [Pipeline] wrap
00:00:12.733 [Pipeline] {
00:00:12.739 [Pipeline] stage
00:00:12.740 [Pipeline] { (Prologue)
00:00:12.752 [Pipeline] echo
00:00:12.753 Node: VM-host-SM17
00:00:12.757 [Pipeline] cleanWs
00:00:12.767 [WS-CLEANUP] Deleting project workspace...
00:00:12.767 [WS-CLEANUP] Deferred wipeout is used...
00:00:12.792 [WS-CLEANUP] done
00:00:13.018 [Pipeline] setCustomBuildProperty
00:00:13.089 [Pipeline] httpRequest
00:00:13.417 [Pipeline] echo
00:00:13.419 Sorcerer 10.211.164.20 is alive
00:00:13.449 [Pipeline] retry
00:00:13.450 [Pipeline] {
00:00:13.461 [Pipeline] httpRequest
00:00:13.465 HttpMethod: GET
00:00:13.465 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.465 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.488 Response Code: HTTP/1.1 200 OK
00:00:13.489 Success: Status code 200 is in the accepted range: 200,404
00:00:13.489 Saving response body to /var/jenkins/workspace/raid-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:43.995 [Pipeline] }
00:00:44.014 [Pipeline] // retry
00:00:44.022 [Pipeline] sh
00:00:44.305 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:44.322 [Pipeline] httpRequest
00:00:44.784 [Pipeline] echo
00:00:44.786 Sorcerer 10.211.164.20 is alive
00:00:44.797 [Pipeline] retry
00:00:44.800 [Pipeline] {
00:00:44.817 [Pipeline] httpRequest
00:00:44.822 HttpMethod: GET
00:00:44.822 URL: http://10.211.164.20/packages/spdk_20bebc9975fc43126ba752184b85e168edda730a.tar.gz
00:00:44.823 Sending request to url: http://10.211.164.20/packages/spdk_20bebc9975fc43126ba752184b85e168edda730a.tar.gz
00:00:44.828 Response Code: HTTP/1.1 200 OK
00:00:44.829 Success: Status code 200 is in the accepted range: 200,404
00:00:44.829 Saving response body to /var/jenkins/workspace/raid-vg-autotest_3/spdk_20bebc9975fc43126ba752184b85e168edda730a.tar.gz
00:04:38.194 [Pipeline] }
00:04:38.210 [Pipeline] // retry
00:04:38.217 [Pipeline] sh
00:04:38.492 + tar --no-same-owner -xf spdk_20bebc9975fc43126ba752184b85e168edda730a.tar.gz
00:04:41.788 [Pipeline] sh
00:04:42.073 + git -C spdk log --oneline -n5
00:04:42.073 20bebc997 lib/reduce: Support storing metadata on backing dev. (4 of 5, data unmap with async metadata)
00:04:42.073 3fb854a13 lib/reduce: Support storing metadata on backing dev. (3 of 5, reload process)
00:04:42.073 f501a7223 lib/reduce: Support storing metadata on backing dev. (2 of 5, data r/w with async metadata)
00:04:42.073 8ffb12d0f lib/reduce: Support storing metadata on backing dev. (1 of 5, struct define and init process)
00:04:42.073 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:04:42.096 [Pipeline] writeFile
00:04:42.112 [Pipeline] sh
00:04:42.393 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:04:42.406 [Pipeline] sh
00:04:42.687 + cat autorun-spdk.conf
00:04:42.687 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:42.687 SPDK_RUN_ASAN=1
00:04:42.687 SPDK_RUN_UBSAN=1
00:04:42.687 SPDK_TEST_RAID=1
00:04:42.687 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:42.694 RUN_NIGHTLY=0
00:04:42.696 [Pipeline] }
00:04:42.710 [Pipeline] // stage
00:04:42.727 [Pipeline] stage
00:04:42.729 [Pipeline] { (Run VM)
00:04:42.742 [Pipeline] sh
00:04:43.024 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:04:43.024 + echo 'Start stage prepare_nvme.sh'
00:04:43.024 Start stage prepare_nvme.sh
00:04:43.024 + [[ -n 7 ]]
00:04:43.024 + disk_prefix=ex7
00:04:43.024 + [[ -n /var/jenkins/workspace/raid-vg-autotest_3 ]]
00:04:43.024 + [[ -e /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf ]]
00:04:43.024 + source /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf
00:04:43.024 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:43.024 ++ SPDK_RUN_ASAN=1
00:04:43.024 ++ SPDK_RUN_UBSAN=1
00:04:43.024 ++ SPDK_TEST_RAID=1
00:04:43.024 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:43.024 ++ RUN_NIGHTLY=0
00:04:43.024 + cd /var/jenkins/workspace/raid-vg-autotest_3
00:04:43.024 + nvme_files=()
00:04:43.024 + declare -A nvme_files
00:04:43.024 + backend_dir=/var/lib/libvirt/images/backends
00:04:43.024 + nvme_files['nvme.img']=5G
00:04:43.024 + nvme_files['nvme-cmb.img']=5G
00:04:43.024 + nvme_files['nvme-multi0.img']=4G
00:04:43.024 + nvme_files['nvme-multi1.img']=4G
00:04:43.024 + nvme_files['nvme-multi2.img']=4G
00:04:43.024 + nvme_files['nvme-openstack.img']=8G
00:04:43.024 + nvme_files['nvme-zns.img']=5G
00:04:43.024 + (( SPDK_TEST_NVME_PMR == 1 ))
00:04:43.024 + (( SPDK_TEST_FTL == 1 ))
00:04:43.024 + (( SPDK_TEST_NVME_FDP == 1 ))
00:04:43.024 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:04:43.024 + for nvme in "${!nvme_files[@]}"
00:04:43.024 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:04:43.024 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:04:43.024 + for nvme in "${!nvme_files[@]}"
00:04:43.024 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:04:43.024 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:04:43.024 + for nvme in "${!nvme_files[@]}"
00:04:43.024 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:04:43.025 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:04:43.025 + for nvme in "${!nvme_files[@]}"
00:04:43.025 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:04:43.025 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:04:43.025 + for nvme in "${!nvme_files[@]}"
00:04:43.025 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:04:43.025 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:04:43.025 + for nvme in "${!nvme_files[@]}"
00:04:43.025 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:04:43.025 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:04:43.025 + for nvme in "${!nvme_files[@]}"
00:04:43.025 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:04:43.283 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:04:43.283 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:04:43.283 + echo 'End stage prepare_nvme.sh'
00:04:43.283 End stage prepare_nvme.sh
00:04:43.295 [Pipeline] sh
00:04:43.575 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:04:43.575 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39
00:04:43.575
00:04:43.575 DIR=/var/jenkins/workspace/raid-vg-autotest_3/spdk/scripts/vagrant
00:04:43.575 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_3/spdk
00:04:43.575 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_3
00:04:43.575 HELP=0
00:04:43.575 DRY_RUN=0
00:04:43.575 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,
00:04:43.575 NVME_DISKS_TYPE=nvme,nvme,
00:04:43.575 NVME_AUTO_CREATE=0
00:04:43.575 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,
00:04:43.575 NVME_CMB=,,
00:04:43.575 NVME_PMR=,,
00:04:43.575 NVME_ZNS=,,
00:04:43.575 NVME_MS=,,
00:04:43.575 NVME_FDP=,,
00:04:43.575 SPDK_VAGRANT_DISTRO=fedora39
00:04:43.575 SPDK_VAGRANT_VMCPU=10
00:04:43.575 SPDK_VAGRANT_VMRAM=12288
00:04:43.575 SPDK_VAGRANT_PROVIDER=libvirt
00:04:43.575 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:04:43.575 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:04:43.575 SPDK_OPENSTACK_NETWORK=0
00:04:43.575 VAGRANT_PACKAGE_BOX=0
00:04:43.575 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile
00:04:43.575 FORCE_DISTRO=true
00:04:43.575 VAGRANT_BOX_VERSION=
00:04:43.575 EXTRA_VAGRANTFILES=
00:04:43.575 NIC_MODEL=e1000
00:04:43.575
00:04:43.575 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt'
00:04:43.575 /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_3
00:04:46.889 Bringing machine 'default' up with 'libvirt' provider...
00:04:47.148 ==> default: Creating image (snapshot of base box volume).
00:04:47.407 ==> default: Creating domain with the following settings...
00:04:47.407 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733466665_5156afce1eb92c346e83
00:04:47.407 ==> default: -- Domain type: kvm
00:04:47.407 ==> default: -- Cpus: 10
00:04:47.407 ==> default: -- Feature: acpi
00:04:47.407 ==> default: -- Feature: apic
00:04:47.407 ==> default: -- Feature: pae
00:04:47.407 ==> default: -- Memory: 12288M
00:04:47.407 ==> default: -- Memory Backing: hugepages:
00:04:47.407 ==> default: -- Management MAC:
00:04:47.407 ==> default: -- Loader:
00:04:47.407 ==> default: -- Nvram:
00:04:47.407 ==> default: -- Base box: spdk/fedora39
00:04:47.407 ==> default: -- Storage pool: default
00:04:47.407 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733466665_5156afce1eb92c346e83.img (20G)
00:04:47.407 ==> default: -- Volume Cache: default
00:04:47.407 ==> default: -- Kernel:
00:04:47.407 ==> default: -- Initrd:
00:04:47.407 ==> default: -- Graphics Type: vnc
00:04:47.407 ==> default: -- Graphics Port: -1
00:04:47.407 ==> default: -- Graphics IP: 127.0.0.1
00:04:47.407 ==> default: -- Graphics Password: Not defined
00:04:47.407 ==> default: -- Video Type: cirrus
00:04:47.407 ==> default: -- Video VRAM: 9216
00:04:47.407 ==> default: -- Sound Type:
00:04:47.407 ==> default: -- Keymap: en-us
00:04:47.407 ==> default: -- TPM Path:
00:04:47.407 ==> default: -- INPUT: type=mouse, bus=ps2
00:04:47.407 ==> default: -- Command line args:
00:04:47.407 ==> default: -> value=-device,
00:04:47.407 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:04:47.407 ==> default: -> value=-drive,
00:04:47.407 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0,
00:04:47.407 ==> default: -> value=-device,
00:04:47.407 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:47.407 ==> default: -> value=-device,
00:04:47.407 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:04:47.407 ==> default: -> value=-drive,
00:04:47.407 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:04:47.407 ==> default: -> value=-device,
00:04:47.407 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:47.407 ==> default: -> value=-drive,
00:04:47.407 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:04:47.407 ==> default: -> value=-device,
00:04:47.407 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:47.407 ==> default: -> value=-drive,
00:04:47.407 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:04:47.408 ==> default: -> value=-device,
00:04:47.408 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:47.408 ==> default: Creating shared folders metadata...
00:04:47.668 ==> default: Starting domain.
00:04:49.044 ==> default: Waiting for domain to get an IP address...
00:05:07.131 ==> default: Waiting for SSH to become available...
00:05:07.131 ==> default: Configuring and enabling network interfaces...
00:05:09.756 default: SSH address: 192.168.121.191:22
00:05:09.756 default: SSH username: vagrant
00:05:09.756 default: SSH auth method: private key
00:05:11.658 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk
00:05:19.783 ==> default: Mounting SSHFS shared folder...
00:05:20.716 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:05:20.716 ==> default: Checking Mount..
00:05:22.093 ==> default: Folder Successfully Mounted!
00:05:22.093 ==> default: Running provisioner: file...
00:05:23.027 default: ~/.gitconfig => .gitconfig
00:05:23.285
00:05:23.285 SUCCESS!
00:05:23.285
00:05:23.285 cd to /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use.
00:05:23.285 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:05:23.285 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm.
00:05:23.285
00:05:23.292 [Pipeline] }
00:05:23.305 [Pipeline] // stage
00:05:23.313 [Pipeline] dir
00:05:23.314 Running in /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt
00:05:23.315 [Pipeline] {
00:05:23.327 [Pipeline] catchError
00:05:23.329 [Pipeline] {
00:05:23.342 [Pipeline] sh
00:05:23.619 + vagrant ssh-config --host vagrant
00:05:23.619 + sed -ne /^Host/,$p
00:05:23.619 + tee ssh_conf
00:05:27.804 Host vagrant
00:05:27.804 HostName 192.168.121.191
00:05:27.804 User vagrant
00:05:27.804 Port 22
00:05:27.804 UserKnownHostsFile /dev/null
00:05:27.804 StrictHostKeyChecking no
00:05:27.804 PasswordAuthentication no
00:05:27.804 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:05:27.804 IdentitiesOnly yes
00:05:27.804 LogLevel FATAL
00:05:27.804 ForwardAgent yes
00:05:27.804 ForwardX11 yes
00:05:27.804
00:05:27.816 [Pipeline] withEnv
00:05:27.818 [Pipeline] {
00:05:27.831 [Pipeline] sh
00:05:28.109 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:05:28.110 source /etc/os-release
00:05:28.110 [[ -e /image.version ]] && img=$(< /image.version)
00:05:28.110 # Minimal, systemd-like check.
00:05:28.110 if [[ -e /.dockerenv ]]; then
00:05:28.110 # Clear garbage from the node's name:
00:05:28.110 # agt-er_autotest_547-896 -> autotest_547-896
00:05:28.110 # $HOSTNAME is the actual container id
00:05:28.110 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:05:28.110 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:05:28.110 # We can assume this is a mount from a host where container is running,
00:05:28.110 # so fetch its hostname to easily identify the target swarm worker.
00:05:28.110 container="$(< /etc/hostname) ($agent)"
00:05:28.110 else
00:05:28.110 # Fallback
00:05:28.110 container=$agent
00:05:28.110 fi
00:05:28.110 fi
00:05:28.110 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:05:28.110
00:05:28.380 [Pipeline] }
00:05:28.395 [Pipeline] // withEnv
00:05:28.405 [Pipeline] setCustomBuildProperty
00:05:28.420 [Pipeline] stage
00:05:28.423 [Pipeline] { (Tests)
00:05:28.437 [Pipeline] sh
00:05:28.714 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:05:28.986 [Pipeline] sh
00:05:29.267 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:05:29.541 [Pipeline] timeout
00:05:29.542 Timeout set to expire in 1 hr 30 min
00:05:29.544 [Pipeline] {
00:05:29.559 [Pipeline] sh
00:05:29.839 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:05:30.407 HEAD is now at 20bebc997 lib/reduce: Support storing metadata on backing dev. (4 of 5, data unmap with async metadata)
00:05:30.422 [Pipeline] sh
00:05:30.760 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:05:30.809 [Pipeline] sh
00:05:31.090 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:05:31.107 [Pipeline] sh
00:05:31.386 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:05:31.647 ++ readlink -f spdk_repo
00:05:31.647 + DIR_ROOT=/home/vagrant/spdk_repo
00:05:31.647 + [[ -n /home/vagrant/spdk_repo ]]
00:05:31.647 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:05:31.647 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:05:31.647 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:05:31.647 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:05:31.647 + [[ -d /home/vagrant/spdk_repo/output ]]
00:05:31.647 + [[ raid-vg-autotest == pkgdep-* ]]
00:05:31.647 + cd /home/vagrant/spdk_repo
00:05:31.647 + source /etc/os-release
00:05:31.647 ++ NAME='Fedora Linux'
00:05:31.647 ++ VERSION='39 (Cloud Edition)'
00:05:31.647 ++ ID=fedora
00:05:31.647 ++ VERSION_ID=39
00:05:31.647 ++ VERSION_CODENAME=
00:05:31.647 ++ PLATFORM_ID=platform:f39
00:05:31.647 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:05:31.647 ++ ANSI_COLOR='0;38;2;60;110;180'
00:05:31.647 ++ LOGO=fedora-logo-icon
00:05:31.647 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:05:31.647 ++ HOME_URL=https://fedoraproject.org/
00:05:31.647 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:05:31.647 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:05:31.647 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:05:31.647 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:05:31.647 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:05:31.647 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:05:31.647 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:05:31.647 ++ SUPPORT_END=2024-11-12
00:05:31.647 ++ VARIANT='Cloud Edition'
00:05:31.647 ++ VARIANT_ID=cloud
00:05:31.647 + uname -a
00:05:31.647 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:05:31.647 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:31.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:31.905 Hugepages
00:05:31.905 node hugesize free / total
00:05:31.905 node0 1048576kB 0 / 0
00:05:31.905 node0 2048kB 0 / 0
00:05:31.905
00:05:31.905 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:32.163 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:32.163 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:05:32.163 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3
00:05:32.163 + rm -f /tmp/spdk-ld-path
00:05:32.163 + source autorun-spdk.conf
00:05:32.163 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:32.163 ++ SPDK_RUN_ASAN=1
00:05:32.163 ++ SPDK_RUN_UBSAN=1
00:05:32.163 ++ SPDK_TEST_RAID=1
00:05:32.163 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:32.163 ++ RUN_NIGHTLY=0
00:05:32.163 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:05:32.163 + [[ -n '' ]]
00:05:32.163 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:05:32.163 + for M in /var/spdk/build-*-manifest.txt
00:05:32.163 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:05:32.163 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:05:32.163 + for M in /var/spdk/build-*-manifest.txt
00:05:32.163 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:05:32.164 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:05:32.164 + for M in /var/spdk/build-*-manifest.txt
00:05:32.164 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:05:32.164 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:05:32.164 ++ uname
00:05:32.164 + [[ Linux == \L\i\n\u\x ]]
00:05:32.164 + sudo dmesg -T
00:05:32.164 + sudo dmesg --clear
00:05:32.164 + dmesg_pid=5205
00:05:32.164 + sudo dmesg -Tw
00:05:32.164 + [[ Fedora Linux == FreeBSD ]]
00:05:32.164 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:32.164 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:32.164 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:32.164 + [[ -x /usr/src/fio-static/fio ]]
00:05:32.164 + export FIO_BIN=/usr/src/fio-static/fio
00:05:32.164 + FIO_BIN=/usr/src/fio-static/fio
00:05:32.164 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:05:32.164 + [[ ! -v VFIO_QEMU_BIN ]]
00:05:32.164 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:05:32.164 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:32.164 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:32.164 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:05:32.164 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:32.164 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:32.164 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:32.164 06:31:50 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:05:32.164 06:31:50 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:32.164 06:31:50 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:32.164 06:31:50 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:05:32.164 06:31:50 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:05:32.164 06:31:50 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:05:32.164 06:31:50 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:32.164 06:31:50 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:05:32.164 06:31:50 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:05:32.164 06:31:50 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:32.422 06:31:50 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:05:32.422 06:31:50 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:32.422 06:31:50 -- scripts/common.sh@15 -- $ shopt -s extglob
00:05:32.422 06:31:50 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:05:32.422 06:31:50 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:32.422 06:31:50 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:32.422 06:31:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:32.422 06:31:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:32.422 06:31:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:32.422 06:31:50 -- paths/export.sh@5 -- $ export PATH
00:05:32.422 06:31:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:32.422 06:31:50 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:05:32.422 06:31:50 -- common/autobuild_common.sh@493 -- $ date +%s
00:05:32.422 06:31:50 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733466710.XXXXXX
00:05:32.422 06:31:50 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733466710.vvUfmT
00:05:32.422 06:31:50 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:05:32.422 06:31:50 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:05:32.422 06:31:50 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:05:32.422 06:31:50 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:05:32.422 06:31:50 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:05:32.422 06:31:50 -- common/autobuild_common.sh@509 -- $ get_config_params
00:05:32.422 06:31:50 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:05:32.422 06:31:50 -- common/autotest_common.sh@10 -- $ set +x
00:05:32.422 06:31:50 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:05:32.422 06:31:50 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:05:32.422 06:31:50 -- pm/common@17 -- $ local monitor
00:05:32.422 06:31:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:32.422 06:31:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:32.422 06:31:50 -- pm/common@25 -- $ sleep 1
00:05:32.422 06:31:50 -- pm/common@21 -- $ date +%s
00:05:32.422 06:31:50 -- pm/common@21 -- $ date +%s
00:05:32.422 06:31:50 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733466710
00:05:32.422 06:31:50 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733466710
00:05:32.422 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733466710_collect-cpu-load.pm.log
00:05:32.422 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733466710_collect-vmstat.pm.log
00:05:33.358 06:31:51 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:05:33.358 06:31:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:05:33.358 06:31:51 -- spdk/autobuild.sh@12 -- $ umask 022
00:05:33.358 06:31:51 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:05:33.358 06:31:51 -- spdk/autobuild.sh@16 -- $ date -u
00:05:33.358 Fri Dec 6 06:31:51 AM UTC 2024
00:05:33.358 06:31:51 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:33.358 v25.01-pre-307-g20bebc997
00:05:33.359 06:31:51 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:05:33.359 06:31:51 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:05:33.359 06:31:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:33.359 06:31:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:33.359 06:31:51 -- common/autotest_common.sh@10 -- $ set +x
00:05:33.359 ************************************
00:05:33.359 START TEST asan
00:05:33.359 ************************************
00:05:33.359 using asan
00:05:33.359 06:31:51 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:05:33.359
00:05:33.359 real 0m0.000s
00:05:33.359 user 0m0.000s
00:05:33.359 sys 0m0.000s
00:05:33.359 06:31:51 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:33.359 ************************************
00:05:33.359 06:31:51 asan -- common/autotest_common.sh@10 -- $ set +x
00:05:33.359 END TEST asan
00:05:33.359 ************************************
00:05:33.359 06:31:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:05:33.359 06:31:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:05:33.359 06:31:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:33.359 06:31:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:33.359 06:31:51 -- common/autotest_common.sh@10 -- $ set +x
00:05:33.359 ************************************
00:05:33.359 START TEST ubsan
00:05:33.359 ************************************
00:05:33.359 using ubsan
00:05:33.359 06:31:51 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:05:33.359
00:05:33.359 real 0m0.000s
00:05:33.359 user 0m0.000s
00:05:33.359 sys 0m0.000s
00:05:33.359 06:31:51 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:33.359 06:31:51 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:33.359 ************************************
00:05:33.359 END TEST ubsan
00:05:33.359 ************************************
00:05:33.618 06:31:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:05:33.618 06:31:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:05:33.618 06:31:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:05:33.618 06:31:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:05:33.618 06:31:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:05:33.618 06:31:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:05:33.618 06:31:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:05:33.618 06:31:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:05:33.618 06:31:52 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:05:33.618 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:05:33.618 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:05:34.188 Using 'verbs' RDMA provider
00:05:50.022 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:06:02.227 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:06:02.227 Creating mk/config.mk...done.
00:06:02.227 Creating mk/cc.flags.mk...done.
00:06:02.227 Type 'make' to build.
00:06:02.227 06:32:20 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:06:02.227 06:32:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:02.227 06:32:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:02.227 06:32:20 -- common/autotest_common.sh@10 -- $ set +x
00:06:02.227 ************************************
00:06:02.227 START TEST make
00:06:02.227 ************************************
00:06:02.227 06:32:20 make -- common/autotest_common.sh@1129 -- $ make -j10
00:06:02.227 make[1]: Nothing to be done for 'all'.
00:06:17.103 The Meson build system 00:06:17.103 Version: 1.5.0 00:06:17.103 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:06:17.103 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:06:17.103 Build type: native build 00:06:17.103 Program cat found: YES (/usr/bin/cat) 00:06:17.103 Project name: DPDK 00:06:17.103 Project version: 24.03.0 00:06:17.103 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:17.103 C linker for the host machine: cc ld.bfd 2.40-14 00:06:17.103 Host machine cpu family: x86_64 00:06:17.103 Host machine cpu: x86_64 00:06:17.103 Message: ## Building in Developer Mode ## 00:06:17.103 Program pkg-config found: YES (/usr/bin/pkg-config) 00:06:17.103 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:06:17.103 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:06:17.103 Program python3 found: YES (/usr/bin/python3) 00:06:17.103 Program cat found: YES (/usr/bin/cat) 00:06:17.103 Compiler for C supports arguments -march=native: YES 00:06:17.103 Checking for size of "void *" : 8 00:06:17.103 Checking for size of "void *" : 8 (cached) 00:06:17.103 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:06:17.103 Library m found: YES 00:06:17.103 Library numa found: YES 00:06:17.103 Has header "numaif.h" : YES 00:06:17.103 Library fdt found: NO 00:06:17.103 Library execinfo found: NO 00:06:17.103 Has header "execinfo.h" : YES 00:06:17.103 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:17.103 Run-time dependency libarchive found: NO (tried pkgconfig) 00:06:17.103 Run-time dependency libbsd found: NO (tried pkgconfig) 00:06:17.103 Run-time dependency jansson found: NO (tried pkgconfig) 00:06:17.103 Run-time dependency openssl found: YES 3.1.1 00:06:17.103 Run-time dependency libpcap found: YES 1.10.4 00:06:17.103 Has header "pcap.h" with dependency 
libpcap: YES 00:06:17.103 Compiler for C supports arguments -Wcast-qual: YES 00:06:17.103 Compiler for C supports arguments -Wdeprecated: YES 00:06:17.103 Compiler for C supports arguments -Wformat: YES 00:06:17.103 Compiler for C supports arguments -Wformat-nonliteral: NO 00:06:17.103 Compiler for C supports arguments -Wformat-security: NO 00:06:17.103 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:17.103 Compiler for C supports arguments -Wmissing-prototypes: YES 00:06:17.103 Compiler for C supports arguments -Wnested-externs: YES 00:06:17.103 Compiler for C supports arguments -Wold-style-definition: YES 00:06:17.103 Compiler for C supports arguments -Wpointer-arith: YES 00:06:17.103 Compiler for C supports arguments -Wsign-compare: YES 00:06:17.103 Compiler for C supports arguments -Wstrict-prototypes: YES 00:06:17.103 Compiler for C supports arguments -Wundef: YES 00:06:17.103 Compiler for C supports arguments -Wwrite-strings: YES 00:06:17.103 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:06:17.103 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:06:17.103 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:17.103 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:06:17.103 Program objdump found: YES (/usr/bin/objdump) 00:06:17.103 Compiler for C supports arguments -mavx512f: YES 00:06:17.103 Checking if "AVX512 checking" compiles: YES 00:06:17.103 Fetching value of define "__SSE4_2__" : 1 00:06:17.103 Fetching value of define "__AES__" : 1 00:06:17.103 Fetching value of define "__AVX__" : 1 00:06:17.103 Fetching value of define "__AVX2__" : 1 00:06:17.103 Fetching value of define "__AVX512BW__" : (undefined) 00:06:17.103 Fetching value of define "__AVX512CD__" : (undefined) 00:06:17.103 Fetching value of define "__AVX512DQ__" : (undefined) 00:06:17.103 Fetching value of define "__AVX512F__" : (undefined) 00:06:17.103 Fetching value of define "__AVX512VL__" : 
(undefined) 00:06:17.103 Fetching value of define "__PCLMUL__" : 1 00:06:17.103 Fetching value of define "__RDRND__" : 1 00:06:17.103 Fetching value of define "__RDSEED__" : 1 00:06:17.103 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:06:17.103 Fetching value of define "__znver1__" : (undefined) 00:06:17.103 Fetching value of define "__znver2__" : (undefined) 00:06:17.103 Fetching value of define "__znver3__" : (undefined) 00:06:17.103 Fetching value of define "__znver4__" : (undefined) 00:06:17.103 Library asan found: YES 00:06:17.103 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:17.103 Message: lib/log: Defining dependency "log" 00:06:17.103 Message: lib/kvargs: Defining dependency "kvargs" 00:06:17.103 Message: lib/telemetry: Defining dependency "telemetry" 00:06:17.103 Library rt found: YES 00:06:17.103 Checking for function "getentropy" : NO 00:06:17.103 Message: lib/eal: Defining dependency "eal" 00:06:17.103 Message: lib/ring: Defining dependency "ring" 00:06:17.103 Message: lib/rcu: Defining dependency "rcu" 00:06:17.103 Message: lib/mempool: Defining dependency "mempool" 00:06:17.103 Message: lib/mbuf: Defining dependency "mbuf" 00:06:17.103 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:17.103 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:06:17.103 Compiler for C supports arguments -mpclmul: YES 00:06:17.103 Compiler for C supports arguments -maes: YES 00:06:17.103 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:17.103 Compiler for C supports arguments -mavx512bw: YES 00:06:17.103 Compiler for C supports arguments -mavx512dq: YES 00:06:17.103 Compiler for C supports arguments -mavx512vl: YES 00:06:17.103 Compiler for C supports arguments -mvpclmulqdq: YES 00:06:17.103 Compiler for C supports arguments -mavx2: YES 00:06:17.103 Compiler for C supports arguments -mavx: YES 00:06:17.103 Message: lib/net: Defining dependency "net" 00:06:17.103 Message: lib/meter: Defining 
dependency "meter" 00:06:17.103 Message: lib/ethdev: Defining dependency "ethdev" 00:06:17.103 Message: lib/pci: Defining dependency "pci" 00:06:17.103 Message: lib/cmdline: Defining dependency "cmdline" 00:06:17.103 Message: lib/hash: Defining dependency "hash" 00:06:17.103 Message: lib/timer: Defining dependency "timer" 00:06:17.103 Message: lib/compressdev: Defining dependency "compressdev" 00:06:17.103 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:17.103 Message: lib/dmadev: Defining dependency "dmadev" 00:06:17.103 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:17.103 Message: lib/power: Defining dependency "power" 00:06:17.103 Message: lib/reorder: Defining dependency "reorder" 00:06:17.103 Message: lib/security: Defining dependency "security" 00:06:17.103 Has header "linux/userfaultfd.h" : YES 00:06:17.103 Has header "linux/vduse.h" : YES 00:06:17.103 Message: lib/vhost: Defining dependency "vhost" 00:06:17.103 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:17.103 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:17.103 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:17.103 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:17.103 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:17.103 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:17.103 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:17.103 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:17.103 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:17.103 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:17.103 Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:17.103 Configuring doxy-api-html.conf using configuration 00:06:17.103 Configuring doxy-api-man.conf using configuration 00:06:17.103 Program mandb found: YES 
(/usr/bin/mandb) 00:06:17.103 Program sphinx-build found: NO 00:06:17.103 Configuring rte_build_config.h using configuration 00:06:17.103 Message: 00:06:17.103 ================= 00:06:17.103 Applications Enabled 00:06:17.103 ================= 00:06:17.103 00:06:17.103 apps: 00:06:17.103 00:06:17.103 00:06:17.103 Message: 00:06:17.103 ================= 00:06:17.103 Libraries Enabled 00:06:17.103 ================= 00:06:17.103 00:06:17.103 libs: 00:06:17.103 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:17.103 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:17.103 cryptodev, dmadev, power, reorder, security, vhost, 00:06:17.103 00:06:17.103 Message: 00:06:17.103 =============== 00:06:17.103 Drivers Enabled 00:06:17.103 =============== 00:06:17.103 00:06:17.103 common: 00:06:17.103 00:06:17.103 bus: 00:06:17.103 pci, vdev, 00:06:17.103 mempool: 00:06:17.103 ring, 00:06:17.103 dma: 00:06:17.103 00:06:17.103 net: 00:06:17.103 00:06:17.103 crypto: 00:06:17.103 00:06:17.103 compress: 00:06:17.103 00:06:17.103 vdpa: 00:06:17.103 00:06:17.103 00:06:17.103 Message: 00:06:17.103 ================= 00:06:17.103 Content Skipped 00:06:17.103 ================= 00:06:17.103 00:06:17.103 apps: 00:06:17.103 dumpcap: explicitly disabled via build config 00:06:17.103 graph: explicitly disabled via build config 00:06:17.103 pdump: explicitly disabled via build config 00:06:17.103 proc-info: explicitly disabled via build config 00:06:17.103 test-acl: explicitly disabled via build config 00:06:17.103 test-bbdev: explicitly disabled via build config 00:06:17.103 test-cmdline: explicitly disabled via build config 00:06:17.103 test-compress-perf: explicitly disabled via build config 00:06:17.103 test-crypto-perf: explicitly disabled via build config 00:06:17.103 test-dma-perf: explicitly disabled via build config 00:06:17.103 test-eventdev: explicitly disabled via build config 00:06:17.103 test-fib: explicitly disabled via build config 00:06:17.103 
test-flow-perf: explicitly disabled via build config 00:06:17.103 test-gpudev: explicitly disabled via build config 00:06:17.103 test-mldev: explicitly disabled via build config 00:06:17.103 test-pipeline: explicitly disabled via build config 00:06:17.103 test-pmd: explicitly disabled via build config 00:06:17.103 test-regex: explicitly disabled via build config 00:06:17.103 test-sad: explicitly disabled via build config 00:06:17.103 test-security-perf: explicitly disabled via build config 00:06:17.103 00:06:17.103 libs: 00:06:17.103 argparse: explicitly disabled via build config 00:06:17.103 metrics: explicitly disabled via build config 00:06:17.103 acl: explicitly disabled via build config 00:06:17.103 bbdev: explicitly disabled via build config 00:06:17.103 bitratestats: explicitly disabled via build config 00:06:17.103 bpf: explicitly disabled via build config 00:06:17.103 cfgfile: explicitly disabled via build config 00:06:17.103 distributor: explicitly disabled via build config 00:06:17.103 efd: explicitly disabled via build config 00:06:17.104 eventdev: explicitly disabled via build config 00:06:17.104 dispatcher: explicitly disabled via build config 00:06:17.104 gpudev: explicitly disabled via build config 00:06:17.104 gro: explicitly disabled via build config 00:06:17.104 gso: explicitly disabled via build config 00:06:17.104 ip_frag: explicitly disabled via build config 00:06:17.104 jobstats: explicitly disabled via build config 00:06:17.104 latencystats: explicitly disabled via build config 00:06:17.104 lpm: explicitly disabled via build config 00:06:17.104 member: explicitly disabled via build config 00:06:17.104 pcapng: explicitly disabled via build config 00:06:17.104 rawdev: explicitly disabled via build config 00:06:17.104 regexdev: explicitly disabled via build config 00:06:17.104 mldev: explicitly disabled via build config 00:06:17.104 rib: explicitly disabled via build config 00:06:17.104 sched: explicitly disabled via build config 00:06:17.104 
stack: explicitly disabled via build config 00:06:17.104 ipsec: explicitly disabled via build config 00:06:17.104 pdcp: explicitly disabled via build config 00:06:17.104 fib: explicitly disabled via build config 00:06:17.104 port: explicitly disabled via build config 00:06:17.104 pdump: explicitly disabled via build config 00:06:17.104 table: explicitly disabled via build config 00:06:17.104 pipeline: explicitly disabled via build config 00:06:17.104 graph: explicitly disabled via build config 00:06:17.104 node: explicitly disabled via build config 00:06:17.104 00:06:17.104 drivers: 00:06:17.104 common/cpt: not in enabled drivers build config 00:06:17.104 common/dpaax: not in enabled drivers build config 00:06:17.104 common/iavf: not in enabled drivers build config 00:06:17.104 common/idpf: not in enabled drivers build config 00:06:17.104 common/ionic: not in enabled drivers build config 00:06:17.104 common/mvep: not in enabled drivers build config 00:06:17.104 common/octeontx: not in enabled drivers build config 00:06:17.104 bus/auxiliary: not in enabled drivers build config 00:06:17.104 bus/cdx: not in enabled drivers build config 00:06:17.104 bus/dpaa: not in enabled drivers build config 00:06:17.104 bus/fslmc: not in enabled drivers build config 00:06:17.104 bus/ifpga: not in enabled drivers build config 00:06:17.104 bus/platform: not in enabled drivers build config 00:06:17.104 bus/uacce: not in enabled drivers build config 00:06:17.104 bus/vmbus: not in enabled drivers build config 00:06:17.104 common/cnxk: not in enabled drivers build config 00:06:17.104 common/mlx5: not in enabled drivers build config 00:06:17.104 common/nfp: not in enabled drivers build config 00:06:17.104 common/nitrox: not in enabled drivers build config 00:06:17.104 common/qat: not in enabled drivers build config 00:06:17.104 common/sfc_efx: not in enabled drivers build config 00:06:17.104 mempool/bucket: not in enabled drivers build config 00:06:17.104 mempool/cnxk: not in enabled 
drivers build config 00:06:17.104 mempool/dpaa: not in enabled drivers build config 00:06:17.104 mempool/dpaa2: not in enabled drivers build config 00:06:17.104 mempool/octeontx: not in enabled drivers build config 00:06:17.104 mempool/stack: not in enabled drivers build config 00:06:17.104 dma/cnxk: not in enabled drivers build config 00:06:17.104 dma/dpaa: not in enabled drivers build config 00:06:17.104 dma/dpaa2: not in enabled drivers build config 00:06:17.104 dma/hisilicon: not in enabled drivers build config 00:06:17.104 dma/idxd: not in enabled drivers build config 00:06:17.104 dma/ioat: not in enabled drivers build config 00:06:17.104 dma/skeleton: not in enabled drivers build config 00:06:17.104 net/af_packet: not in enabled drivers build config 00:06:17.104 net/af_xdp: not in enabled drivers build config 00:06:17.104 net/ark: not in enabled drivers build config 00:06:17.104 net/atlantic: not in enabled drivers build config 00:06:17.104 net/avp: not in enabled drivers build config 00:06:17.104 net/axgbe: not in enabled drivers build config 00:06:17.104 net/bnx2x: not in enabled drivers build config 00:06:17.104 net/bnxt: not in enabled drivers build config 00:06:17.104 net/bonding: not in enabled drivers build config 00:06:17.104 net/cnxk: not in enabled drivers build config 00:06:17.104 net/cpfl: not in enabled drivers build config 00:06:17.104 net/cxgbe: not in enabled drivers build config 00:06:17.104 net/dpaa: not in enabled drivers build config 00:06:17.104 net/dpaa2: not in enabled drivers build config 00:06:17.104 net/e1000: not in enabled drivers build config 00:06:17.104 net/ena: not in enabled drivers build config 00:06:17.104 net/enetc: not in enabled drivers build config 00:06:17.104 net/enetfec: not in enabled drivers build config 00:06:17.104 net/enic: not in enabled drivers build config 00:06:17.104 net/failsafe: not in enabled drivers build config 00:06:17.104 net/fm10k: not in enabled drivers build config 00:06:17.104 net/gve: not in 
enabled drivers build config 00:06:17.104 net/hinic: not in enabled drivers build config 00:06:17.104 net/hns3: not in enabled drivers build config 00:06:17.104 net/i40e: not in enabled drivers build config 00:06:17.104 net/iavf: not in enabled drivers build config 00:06:17.104 net/ice: not in enabled drivers build config 00:06:17.104 net/idpf: not in enabled drivers build config 00:06:17.104 net/igc: not in enabled drivers build config 00:06:17.104 net/ionic: not in enabled drivers build config 00:06:17.104 net/ipn3ke: not in enabled drivers build config 00:06:17.104 net/ixgbe: not in enabled drivers build config 00:06:17.104 net/mana: not in enabled drivers build config 00:06:17.104 net/memif: not in enabled drivers build config 00:06:17.104 net/mlx4: not in enabled drivers build config 00:06:17.104 net/mlx5: not in enabled drivers build config 00:06:17.104 net/mvneta: not in enabled drivers build config 00:06:17.104 net/mvpp2: not in enabled drivers build config 00:06:17.104 net/netvsc: not in enabled drivers build config 00:06:17.104 net/nfb: not in enabled drivers build config 00:06:17.104 net/nfp: not in enabled drivers build config 00:06:17.104 net/ngbe: not in enabled drivers build config 00:06:17.104 net/null: not in enabled drivers build config 00:06:17.104 net/octeontx: not in enabled drivers build config 00:06:17.104 net/octeon_ep: not in enabled drivers build config 00:06:17.104 net/pcap: not in enabled drivers build config 00:06:17.104 net/pfe: not in enabled drivers build config 00:06:17.104 net/qede: not in enabled drivers build config 00:06:17.104 net/ring: not in enabled drivers build config 00:06:17.104 net/sfc: not in enabled drivers build config 00:06:17.104 net/softnic: not in enabled drivers build config 00:06:17.104 net/tap: not in enabled drivers build config 00:06:17.104 net/thunderx: not in enabled drivers build config 00:06:17.104 net/txgbe: not in enabled drivers build config 00:06:17.104 net/vdev_netvsc: not in enabled drivers build 
config 00:06:17.104 net/vhost: not in enabled drivers build config 00:06:17.104 net/virtio: not in enabled drivers build config 00:06:17.104 net/vmxnet3: not in enabled drivers build config 00:06:17.104 raw/*: missing internal dependency, "rawdev" 00:06:17.104 crypto/armv8: not in enabled drivers build config 00:06:17.104 crypto/bcmfs: not in enabled drivers build config 00:06:17.104 crypto/caam_jr: not in enabled drivers build config 00:06:17.104 crypto/ccp: not in enabled drivers build config 00:06:17.104 crypto/cnxk: not in enabled drivers build config 00:06:17.104 crypto/dpaa_sec: not in enabled drivers build config 00:06:17.104 crypto/dpaa2_sec: not in enabled drivers build config 00:06:17.104 crypto/ipsec_mb: not in enabled drivers build config 00:06:17.104 crypto/mlx5: not in enabled drivers build config 00:06:17.104 crypto/mvsam: not in enabled drivers build config 00:06:17.104 crypto/nitrox: not in enabled drivers build config 00:06:17.104 crypto/null: not in enabled drivers build config 00:06:17.104 crypto/octeontx: not in enabled drivers build config 00:06:17.104 crypto/openssl: not in enabled drivers build config 00:06:17.104 crypto/scheduler: not in enabled drivers build config 00:06:17.104 crypto/uadk: not in enabled drivers build config 00:06:17.104 crypto/virtio: not in enabled drivers build config 00:06:17.104 compress/isal: not in enabled drivers build config 00:06:17.104 compress/mlx5: not in enabled drivers build config 00:06:17.104 compress/nitrox: not in enabled drivers build config 00:06:17.104 compress/octeontx: not in enabled drivers build config 00:06:17.104 compress/zlib: not in enabled drivers build config 00:06:17.104 regex/*: missing internal dependency, "regexdev" 00:06:17.104 ml/*: missing internal dependency, "mldev" 00:06:17.104 vdpa/ifc: not in enabled drivers build config 00:06:17.104 vdpa/mlx5: not in enabled drivers build config 00:06:17.104 vdpa/nfp: not in enabled drivers build config 00:06:17.104 vdpa/sfc: not in enabled 
drivers build config 00:06:17.104 event/*: missing internal dependency, "eventdev" 00:06:17.104 baseband/*: missing internal dependency, "bbdev" 00:06:17.104 gpu/*: missing internal dependency, "gpudev" 00:06:17.104 00:06:17.104 00:06:17.104 Build targets in project: 85 00:06:17.104 00:06:17.104 DPDK 24.03.0 00:06:17.104 00:06:17.104 User defined options 00:06:17.104 buildtype : debug 00:06:17.104 default_library : shared 00:06:17.104 libdir : lib 00:06:17.104 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:17.104 b_sanitize : address 00:06:17.104 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:17.104 c_link_args : 00:06:17.104 cpu_instruction_set: native 00:06:17.104 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:06:17.104 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:06:17.104 enable_docs : false 00:06:17.104 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:06:17.104 enable_kmods : false 00:06:17.104 max_lcores : 128 00:06:17.104 tests : false 00:06:17.104 00:06:17.104 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:17.104 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:06:17.104 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:17.104 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:17.104 [3/268] Linking static target lib/librte_kvargs.a 00:06:17.104 [4/268] Compiling C object 
lib/librte_log.a.p/log_log.c.o 00:06:17.104 [5/268] Linking static target lib/librte_log.a 00:06:17.104 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:17.104 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:17.104 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:17.104 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:17.104 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:17.104 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:17.104 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:17.104 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:17.104 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:17.104 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:17.104 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:17.104 [17/268] Linking static target lib/librte_telemetry.a 00:06:17.363 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:17.363 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:17.363 [20/268] Linking target lib/librte_log.so.24.1 00:06:17.622 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:17.622 [22/268] Linking target lib/librte_kvargs.so.24.1 00:06:17.881 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:17.881 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:17.881 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:17.881 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 
00:06:18.207 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:18.207 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:18.207 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:18.207 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:18.207 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:18.207 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:18.207 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:18.207 [34/268] Linking target lib/librte_telemetry.so.24.1 00:06:18.466 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:18.466 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:18.466 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:18.724 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:18.983 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:18.983 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:18.983 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:18.983 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:18.983 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:19.241 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:19.241 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:19.241 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:19.499 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:19.499 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:19.499 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:19.757 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:19.757 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:20.015 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:20.015 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:20.015 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:20.273 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:20.273 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:20.273 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:20.273 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:20.531 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:20.531 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:20.531 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:20.531 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:20.531 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:20.531 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:20.790 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:21.048 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:21.048 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:21.048 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:21.048 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:21.307 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:21.307 
[71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:21.307 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:21.307 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:21.307 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:21.307 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:21.565 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:21.823 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:21.823 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:21.823 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:22.082 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:22.082 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:22.082 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:22.082 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:22.082 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:22.340 [85/268] Linking static target lib/librte_eal.a 00:06:22.340 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:22.340 [87/268] Linking static target lib/librte_ring.a 00:06:22.599 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:22.599 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:22.599 [90/268] Linking static target lib/librte_rcu.a 00:06:22.857 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:22.857 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:22.857 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:22.857 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture 
output) 00:06:22.857 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:23.115 [96/268] Linking static target lib/librte_mempool.a 00:06:23.115 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:23.373 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:23.373 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:23.373 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:23.373 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:23.950 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:23.950 [103/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:23.950 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:23.950 [105/268] Linking static target lib/librte_mbuf.a 00:06:23.950 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:24.266 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:24.266 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:24.266 [109/268] Linking static target lib/librte_net.a 00:06:24.266 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:24.266 [111/268] Linking static target lib/librte_meter.a 00:06:24.266 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:24.266 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:24.524 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:24.524 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:24.781 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:24.781 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:25.038 [118/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:25.360 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:25.360 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:25.926 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:25.926 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:25.926 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:25.926 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:26.183 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:26.183 [126/268] Linking static target lib/librte_pci.a 00:06:26.183 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:26.183 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:26.440 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:26.440 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:26.440 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:26.440 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:26.440 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:26.697 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:26.697 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:26.697 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:26.697 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:26.697 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:26.697 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:26.955 [140/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:26.955 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:26.955 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:26.955 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:26.955 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:26.955 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:26.955 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:26.955 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:26.955 [148/268] Linking static target lib/librte_cmdline.a 00:06:27.519 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:27.519 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:27.777 [151/268] Linking static target lib/librte_ethdev.a 00:06:27.777 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:27.777 [153/268] Linking static target lib/librte_timer.a 00:06:27.777 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:27.777 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:27.777 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:27.777 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:28.710 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:28.710 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:28.710 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:28.710 [161/268] Linking static target lib/librte_hash.a 00:06:28.710 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:28.710 [163/268] Compiling C 
object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:28.710 [164/268] Linking static target lib/librte_compressdev.a 00:06:28.969 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:28.969 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:28.969 [167/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:28.969 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:29.227 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:29.227 [170/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:29.227 [171/268] Linking static target lib/librte_dmadev.a 00:06:29.794 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:29.794 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:29.794 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.053 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:30.053 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:30.053 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.311 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.311 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:30.311 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:30.569 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:30.569 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:30.828 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:30.828 [184/268] Linking static target lib/librte_power.a 00:06:31.086 [185/268] 
Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:31.086 [186/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:31.086 [187/268] Linking static target lib/librte_reorder.a 00:06:31.086 [188/268] Linking static target lib/librte_cryptodev.a 00:06:31.086 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:31.086 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:31.345 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:31.672 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:31.672 [193/268] Linking static target lib/librte_security.a 00:06:31.672 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.946 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:32.204 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.464 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:32.464 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:32.723 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:32.723 [200/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.723 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:33.290 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:33.548 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:33.548 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:33.807 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:33.807 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:33.807 [207/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:34.066 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:34.066 [209/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:34.066 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:34.066 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:34.323 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:34.323 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:34.323 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:34.581 [215/268] Linking static target drivers/librte_bus_vdev.a 00:06:34.581 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:34.581 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:34.581 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:34.581 [219/268] Linking static target drivers/librte_bus_pci.a 00:06:34.840 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:34.840 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:34.840 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:35.098 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:35.098 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:35.098 [225/268] Linking static target drivers/librte_mempool_ring.a 00:06:35.098 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:35.098 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture 
output) 00:06:36.032 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:36.032 [229/268] Linking target lib/librte_eal.so.24.1 00:06:36.289 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:36.289 [231/268] Linking target lib/librte_ring.so.24.1 00:06:36.289 [232/268] Linking target lib/librte_meter.so.24.1 00:06:36.289 [233/268] Linking target lib/librte_dmadev.so.24.1 00:06:36.289 [234/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:36.289 [235/268] Linking target lib/librte_timer.so.24.1 00:06:36.289 [236/268] Linking target lib/librte_pci.so.24.1 00:06:36.289 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:06:36.547 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:36.547 [239/268] Linking target lib/librte_mempool.so.24.1 00:06:36.547 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:36.547 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:36.547 [242/268] Linking target lib/librte_rcu.so.24.1 00:06:36.547 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:36.547 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:36.547 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:06:36.804 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:36.804 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:36.804 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:06:36.804 [249/268] Linking target lib/librte_mbuf.so.24.1 00:06:37.062 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:37.062 [251/268] Linking target lib/librte_reorder.so.24.1 00:06:37.062 [252/268] Linking target 
lib/librte_compressdev.so.24.1 00:06:37.062 [253/268] Linking target lib/librte_net.so.24.1 00:06:37.062 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:06:37.321 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:37.321 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:37.321 [257/268] Linking target lib/librte_cmdline.so.24.1 00:06:37.321 [258/268] Linking target lib/librte_hash.so.24.1 00:06:37.321 [259/268] Linking target lib/librte_security.so.24.1 00:06:37.579 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:37.580 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:37.839 [262/268] Linking target lib/librte_ethdev.so.24.1 00:06:37.839 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:38.101 [264/268] Linking target lib/librte_power.so.24.1 00:06:40.624 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:40.625 [266/268] Linking static target lib/librte_vhost.a 00:06:42.524 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:42.524 [268/268] Linking target lib/librte_vhost.so.24.1 00:06:42.524 INFO: autodetecting backend as ninja 00:06:42.524 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:07:09.085 CC lib/log/log.o 00:07:09.085 CC lib/log/log_flags.o 00:07:09.085 CC lib/ut/ut.o 00:07:09.085 CC lib/log/log_deprecated.o 00:07:09.085 CC lib/ut_mock/mock.o 00:07:09.085 LIB libspdk_ut_mock.a 00:07:09.085 SO libspdk_ut_mock.so.6.0 00:07:09.085 LIB libspdk_log.a 00:07:09.085 LIB libspdk_ut.a 00:07:09.085 SYMLINK libspdk_ut_mock.so 00:07:09.085 SO libspdk_log.so.7.1 00:07:09.085 SO libspdk_ut.so.2.0 00:07:09.085 SYMLINK libspdk_ut.so 00:07:09.085 SYMLINK libspdk_log.so 
00:07:09.085 CC lib/ioat/ioat.o 00:07:09.085 CC lib/util/base64.o 00:07:09.085 CC lib/dma/dma.o 00:07:09.085 CC lib/util/bit_array.o 00:07:09.085 CC lib/util/cpuset.o 00:07:09.085 CXX lib/trace_parser/trace.o 00:07:09.085 CC lib/util/crc16.o 00:07:09.085 CC lib/util/crc32.o 00:07:09.085 CC lib/util/crc32c.o 00:07:09.085 CC lib/vfio_user/host/vfio_user_pci.o 00:07:09.085 CC lib/util/crc32_ieee.o 00:07:09.085 CC lib/util/crc64.o 00:07:09.085 CC lib/vfio_user/host/vfio_user.o 00:07:09.085 CC lib/util/dif.o 00:07:09.085 CC lib/util/fd.o 00:07:09.085 CC lib/util/fd_group.o 00:07:09.085 LIB libspdk_dma.a 00:07:09.085 CC lib/util/file.o 00:07:09.085 CC lib/util/hexlify.o 00:07:09.085 SO libspdk_dma.so.5.0 00:07:09.085 CC lib/util/iov.o 00:07:09.085 CC lib/util/math.o 00:07:09.085 LIB libspdk_ioat.a 00:07:09.085 SO libspdk_ioat.so.7.0 00:07:09.085 SYMLINK libspdk_dma.so 00:07:09.085 SYMLINK libspdk_ioat.so 00:07:09.085 CC lib/util/net.o 00:07:09.085 CC lib/util/pipe.o 00:07:09.085 CC lib/util/strerror_tls.o 00:07:09.085 CC lib/util/string.o 00:07:09.085 CC lib/util/uuid.o 00:07:09.085 LIB libspdk_vfio_user.a 00:07:09.085 CC lib/util/xor.o 00:07:09.085 CC lib/util/zipf.o 00:07:09.085 SO libspdk_vfio_user.so.5.0 00:07:09.085 CC lib/util/md5.o 00:07:09.085 SYMLINK libspdk_vfio_user.so 00:07:09.343 LIB libspdk_util.a 00:07:09.602 LIB libspdk_trace_parser.a 00:07:09.602 SO libspdk_util.so.10.1 00:07:09.602 SO libspdk_trace_parser.so.6.0 00:07:09.602 SYMLINK libspdk_util.so 00:07:09.602 SYMLINK libspdk_trace_parser.so 00:07:09.860 CC lib/json/json_parse.o 00:07:09.860 CC lib/json/json_util.o 00:07:09.860 CC lib/conf/conf.o 00:07:09.860 CC lib/json/json_write.o 00:07:09.860 CC lib/idxd/idxd.o 00:07:09.860 CC lib/idxd/idxd_user.o 00:07:09.860 CC lib/idxd/idxd_kernel.o 00:07:09.860 CC lib/env_dpdk/env.o 00:07:09.860 CC lib/rdma_utils/rdma_utils.o 00:07:09.860 CC lib/vmd/vmd.o 00:07:10.118 CC lib/vmd/led.o 00:07:10.118 LIB libspdk_conf.a 00:07:10.118 CC lib/env_dpdk/memory.o 
00:07:10.377 SO libspdk_conf.so.6.0 00:07:10.377 SYMLINK libspdk_conf.so 00:07:10.377 CC lib/env_dpdk/pci.o 00:07:10.377 LIB libspdk_rdma_utils.a 00:07:10.377 CC lib/env_dpdk/init.o 00:07:10.377 CC lib/env_dpdk/threads.o 00:07:10.377 CC lib/env_dpdk/pci_ioat.o 00:07:10.377 SO libspdk_rdma_utils.so.1.0 00:07:10.377 LIB libspdk_json.a 00:07:10.377 SO libspdk_json.so.6.0 00:07:10.377 SYMLINK libspdk_rdma_utils.so 00:07:10.377 CC lib/env_dpdk/pci_virtio.o 00:07:10.658 SYMLINK libspdk_json.so 00:07:10.658 CC lib/env_dpdk/pci_vmd.o 00:07:10.658 CC lib/env_dpdk/pci_idxd.o 00:07:10.916 CC lib/env_dpdk/pci_event.o 00:07:10.916 CC lib/rdma_provider/common.o 00:07:10.916 CC lib/jsonrpc/jsonrpc_server.o 00:07:10.916 CC lib/env_dpdk/sigbus_handler.o 00:07:10.916 LIB libspdk_idxd.a 00:07:10.916 SO libspdk_idxd.so.12.1 00:07:11.175 SYMLINK libspdk_idxd.so 00:07:11.175 CC lib/env_dpdk/pci_dpdk.o 00:07:11.175 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:11.175 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:11.175 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:11.175 CC lib/jsonrpc/jsonrpc_client.o 00:07:11.175 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:11.175 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:11.175 LIB libspdk_vmd.a 00:07:11.433 SO libspdk_vmd.so.6.0 00:07:11.433 SYMLINK libspdk_vmd.so 00:07:11.691 LIB libspdk_rdma_provider.a 00:07:11.692 LIB libspdk_jsonrpc.a 00:07:11.692 SO libspdk_rdma_provider.so.7.0 00:07:11.692 SO libspdk_jsonrpc.so.6.0 00:07:11.692 SYMLINK libspdk_rdma_provider.so 00:07:11.692 SYMLINK libspdk_jsonrpc.so 00:07:11.950 CC lib/rpc/rpc.o 00:07:12.208 LIB libspdk_rpc.a 00:07:12.467 SO libspdk_rpc.so.6.0 00:07:12.467 SYMLINK libspdk_rpc.so 00:07:12.725 LIB libspdk_env_dpdk.a 00:07:12.725 CC lib/notify/notify.o 00:07:12.725 CC lib/notify/notify_rpc.o 00:07:12.725 CC lib/keyring/keyring.o 00:07:12.725 CC lib/keyring/keyring_rpc.o 00:07:12.725 CC lib/trace/trace.o 00:07:12.725 CC lib/trace/trace_flags.o 00:07:12.725 CC lib/trace/trace_rpc.o 00:07:12.725 SO 
libspdk_env_dpdk.so.15.1 00:07:12.984 LIB libspdk_notify.a 00:07:12.984 SYMLINK libspdk_env_dpdk.so 00:07:12.984 SO libspdk_notify.so.6.0 00:07:12.984 LIB libspdk_trace.a 00:07:12.984 SYMLINK libspdk_notify.so 00:07:12.984 SO libspdk_trace.so.11.0 00:07:12.984 LIB libspdk_keyring.a 00:07:13.242 SO libspdk_keyring.so.2.0 00:07:13.242 SYMLINK libspdk_trace.so 00:07:13.242 SYMLINK libspdk_keyring.so 00:07:13.500 CC lib/thread/thread.o 00:07:13.500 CC lib/thread/iobuf.o 00:07:13.500 CC lib/sock/sock.o 00:07:13.500 CC lib/sock/sock_rpc.o 00:07:14.067 LIB libspdk_sock.a 00:07:14.067 SO libspdk_sock.so.10.0 00:07:14.067 SYMLINK libspdk_sock.so 00:07:14.325 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:14.325 CC lib/nvme/nvme_ctrlr.o 00:07:14.325 CC lib/nvme/nvme_fabric.o 00:07:14.325 CC lib/nvme/nvme_ns_cmd.o 00:07:14.325 CC lib/nvme/nvme_ns.o 00:07:14.325 CC lib/nvme/nvme_pcie_common.o 00:07:14.325 CC lib/nvme/nvme_pcie.o 00:07:14.325 CC lib/nvme/nvme_qpair.o 00:07:14.325 CC lib/nvme/nvme.o 00:07:15.259 CC lib/nvme/nvme_quirks.o 00:07:15.517 CC lib/nvme/nvme_transport.o 00:07:15.517 CC lib/nvme/nvme_discovery.o 00:07:15.517 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:15.517 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:15.517 LIB libspdk_thread.a 00:07:15.824 SO libspdk_thread.so.11.0 00:07:15.824 CC lib/nvme/nvme_tcp.o 00:07:15.824 CC lib/nvme/nvme_opal.o 00:07:15.824 CC lib/nvme/nvme_io_msg.o 00:07:15.824 SYMLINK libspdk_thread.so 00:07:15.824 CC lib/nvme/nvme_poll_group.o 00:07:16.105 CC lib/nvme/nvme_zns.o 00:07:16.364 CC lib/nvme/nvme_stubs.o 00:07:16.364 CC lib/nvme/nvme_auth.o 00:07:16.364 CC lib/nvme/nvme_cuse.o 00:07:16.364 CC lib/nvme/nvme_rdma.o 00:07:16.621 CC lib/accel/accel.o 00:07:16.621 CC lib/blob/blobstore.o 00:07:16.892 CC lib/init/json_config.o 00:07:16.892 CC lib/accel/accel_rpc.o 00:07:17.151 CC lib/accel/accel_sw.o 00:07:17.151 CC lib/virtio/virtio.o 00:07:17.151 CC lib/init/subsystem.o 00:07:17.408 CC lib/init/subsystem_rpc.o 00:07:17.408 CC lib/init/rpc.o 00:07:17.408 
CC lib/blob/request.o 00:07:17.666 CC lib/virtio/virtio_vhost_user.o 00:07:17.666 CC lib/virtio/virtio_vfio_user.o 00:07:17.666 LIB libspdk_init.a 00:07:17.666 SO libspdk_init.so.6.0 00:07:17.666 CC lib/blob/zeroes.o 00:07:17.666 CC lib/fsdev/fsdev.o 00:07:17.666 SYMLINK libspdk_init.so 00:07:17.666 CC lib/blob/blob_bs_dev.o 00:07:17.925 CC lib/virtio/virtio_pci.o 00:07:17.925 CC lib/fsdev/fsdev_io.o 00:07:17.925 CC lib/fsdev/fsdev_rpc.o 00:07:17.925 CC lib/event/app.o 00:07:18.184 CC lib/event/reactor.o 00:07:18.184 CC lib/event/log_rpc.o 00:07:18.184 CC lib/event/app_rpc.o 00:07:18.184 LIB libspdk_accel.a 00:07:18.184 SO libspdk_accel.so.16.0 00:07:18.184 LIB libspdk_virtio.a 00:07:18.184 SO libspdk_virtio.so.7.0 00:07:18.184 LIB libspdk_nvme.a 00:07:18.442 CC lib/event/scheduler_static.o 00:07:18.442 SYMLINK libspdk_accel.so 00:07:18.442 SYMLINK libspdk_virtio.so 00:07:18.442 CC lib/bdev/bdev.o 00:07:18.442 CC lib/bdev/bdev_rpc.o 00:07:18.442 CC lib/bdev/bdev_zone.o 00:07:18.442 SO libspdk_nvme.so.15.0 00:07:18.700 CC lib/bdev/part.o 00:07:18.700 CC lib/bdev/scsi_nvme.o 00:07:18.700 LIB libspdk_fsdev.a 00:07:18.700 SO libspdk_fsdev.so.2.0 00:07:18.700 LIB libspdk_event.a 00:07:18.700 SYMLINK libspdk_fsdev.so 00:07:18.700 SO libspdk_event.so.14.0 00:07:18.973 SYMLINK libspdk_event.so 00:07:18.974 SYMLINK libspdk_nvme.so 00:07:18.974 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:19.908 LIB libspdk_fuse_dispatcher.a 00:07:19.908 SO libspdk_fuse_dispatcher.so.1.0 00:07:20.165 SYMLINK libspdk_fuse_dispatcher.so 00:07:21.538 LIB libspdk_blob.a 00:07:21.538 SO libspdk_blob.so.12.0 00:07:21.538 SYMLINK libspdk_blob.so 00:07:21.796 CC lib/blobfs/blobfs.o 00:07:21.796 CC lib/blobfs/tree.o 00:07:21.796 CC lib/lvol/lvol.o 00:07:22.735 LIB libspdk_bdev.a 00:07:22.735 SO libspdk_bdev.so.17.0 00:07:22.993 SYMLINK libspdk_bdev.so 00:07:22.993 CC lib/nvmf/ctrlr.o 00:07:22.994 CC lib/nvmf/ctrlr_discovery.o 00:07:22.994 CC lib/nvmf/ctrlr_bdev.o 00:07:22.994 CC 
lib/nvmf/subsystem.o 00:07:22.994 CC lib/nbd/nbd.o 00:07:22.994 CC lib/ftl/ftl_core.o 00:07:22.994 CC lib/scsi/dev.o 00:07:22.994 CC lib/ublk/ublk.o 00:07:23.251 LIB libspdk_blobfs.a 00:07:23.251 SO libspdk_blobfs.so.11.0 00:07:23.251 LIB libspdk_lvol.a 00:07:23.251 SO libspdk_lvol.so.11.0 00:07:23.510 SYMLINK libspdk_blobfs.so 00:07:23.510 SYMLINK libspdk_lvol.so 00:07:23.510 CC lib/nvmf/nvmf.o 00:07:23.510 CC lib/nvmf/nvmf_rpc.o 00:07:23.510 CC lib/scsi/lun.o 00:07:23.768 CC lib/nbd/nbd_rpc.o 00:07:23.768 CC lib/scsi/port.o 00:07:24.027 LIB libspdk_nbd.a 00:07:24.027 SO libspdk_nbd.so.7.0 00:07:24.027 CC lib/scsi/scsi.o 00:07:24.027 CC lib/ftl/ftl_init.o 00:07:24.027 SYMLINK libspdk_nbd.so 00:07:24.027 CC lib/ftl/ftl_layout.o 00:07:24.027 CC lib/ublk/ublk_rpc.o 00:07:24.339 CC lib/scsi/scsi_bdev.o 00:07:24.339 CC lib/ftl/ftl_debug.o 00:07:24.339 CC lib/ftl/ftl_io.o 00:07:24.339 CC lib/ftl/ftl_sb.o 00:07:24.598 CC lib/ftl/ftl_l2p.o 00:07:24.598 LIB libspdk_ublk.a 00:07:24.598 CC lib/ftl/ftl_l2p_flat.o 00:07:24.598 SO libspdk_ublk.so.3.0 00:07:24.856 CC lib/scsi/scsi_pr.o 00:07:24.856 SYMLINK libspdk_ublk.so 00:07:24.856 CC lib/scsi/scsi_rpc.o 00:07:24.856 CC lib/scsi/task.o 00:07:24.856 CC lib/ftl/ftl_nv_cache.o 00:07:24.856 CC lib/ftl/ftl_band.o 00:07:24.856 CC lib/ftl/ftl_band_ops.o 00:07:24.856 CC lib/nvmf/transport.o 00:07:24.856 CC lib/nvmf/tcp.o 00:07:24.856 CC lib/ftl/ftl_writer.o 00:07:25.114 CC lib/nvmf/stubs.o 00:07:25.373 LIB libspdk_scsi.a 00:07:25.373 CC lib/nvmf/mdns_server.o 00:07:25.373 CC lib/nvmf/rdma.o 00:07:25.373 SO libspdk_scsi.so.9.0 00:07:25.373 SYMLINK libspdk_scsi.so 00:07:25.373 CC lib/ftl/ftl_rq.o 00:07:25.373 CC lib/ftl/ftl_reloc.o 00:07:25.631 CC lib/ftl/ftl_l2p_cache.o 00:07:25.631 CC lib/ftl/ftl_p2l.o 00:07:25.890 CC lib/iscsi/conn.o 00:07:25.890 CC lib/iscsi/init_grp.o 00:07:25.890 CC lib/ftl/ftl_p2l_log.o 00:07:25.890 CC lib/vhost/vhost.o 00:07:26.148 CC lib/iscsi/iscsi.o 00:07:26.406 CC lib/nvmf/auth.o 00:07:26.406 CC 
lib/ftl/mngt/ftl_mngt.o 00:07:26.406 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:26.406 CC lib/iscsi/param.o 00:07:26.666 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:26.666 CC lib/iscsi/portal_grp.o 00:07:26.666 CC lib/iscsi/tgt_node.o 00:07:26.924 CC lib/iscsi/iscsi_subsystem.o 00:07:26.924 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:26.924 CC lib/iscsi/iscsi_rpc.o 00:07:27.182 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:27.182 CC lib/vhost/vhost_rpc.o 00:07:27.441 CC lib/iscsi/task.o 00:07:27.441 CC lib/vhost/vhost_scsi.o 00:07:27.441 CC lib/vhost/vhost_blk.o 00:07:27.441 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:27.699 CC lib/vhost/rte_vhost_user.o 00:07:27.699 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:27.699 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:27.957 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:27.957 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:27.957 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:27.957 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:28.216 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:28.216 CC lib/ftl/utils/ftl_conf.o 00:07:28.216 CC lib/ftl/utils/ftl_md.o 00:07:28.216 CC lib/ftl/utils/ftl_mempool.o 00:07:28.474 CC lib/ftl/utils/ftl_bitmap.o 00:07:28.474 CC lib/ftl/utils/ftl_property.o 00:07:28.474 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:28.474 LIB libspdk_nvmf.a 00:07:28.733 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:28.733 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:28.733 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:28.733 SO libspdk_nvmf.so.20.0 00:07:28.733 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:28.733 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:28.733 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:28.991 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:28.991 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:28.991 LIB libspdk_iscsi.a 00:07:28.991 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:28.991 SYMLINK libspdk_nvmf.so 00:07:28.991 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:28.991 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:28.991 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:28.991 CC lib/ftl/base/ftl_base_dev.o 
00:07:29.250 SO libspdk_iscsi.so.8.0 00:07:29.250 CC lib/ftl/base/ftl_base_bdev.o 00:07:29.250 CC lib/ftl/ftl_trace.o 00:07:29.509 SYMLINK libspdk_iscsi.so 00:07:29.509 LIB libspdk_ftl.a 00:07:29.767 LIB libspdk_vhost.a 00:07:29.767 SO libspdk_vhost.so.8.0 00:07:29.767 SO libspdk_ftl.so.9.0 00:07:30.025 SYMLINK libspdk_vhost.so 00:07:30.285 SYMLINK libspdk_ftl.so 00:07:30.545 CC module/env_dpdk/env_dpdk_rpc.o 00:07:30.545 CC module/fsdev/aio/fsdev_aio.o 00:07:30.545 CC module/sock/posix/posix.o 00:07:30.545 CC module/keyring/file/keyring.o 00:07:30.545 CC module/keyring/linux/keyring.o 00:07:30.545 CC module/scheduler/gscheduler/gscheduler.o 00:07:30.545 CC module/blob/bdev/blob_bdev.o 00:07:30.545 CC module/accel/error/accel_error.o 00:07:30.545 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:30.804 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:30.804 LIB libspdk_env_dpdk_rpc.a 00:07:30.804 SO libspdk_env_dpdk_rpc.so.6.0 00:07:30.804 CC module/keyring/linux/keyring_rpc.o 00:07:30.804 SYMLINK libspdk_env_dpdk_rpc.so 00:07:30.804 CC module/keyring/file/keyring_rpc.o 00:07:30.804 LIB libspdk_scheduler_dpdk_governor.a 00:07:30.804 LIB libspdk_scheduler_gscheduler.a 00:07:30.804 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:30.804 CC module/accel/error/accel_error_rpc.o 00:07:30.804 LIB libspdk_scheduler_dynamic.a 00:07:30.804 SO libspdk_scheduler_gscheduler.so.4.0 00:07:31.064 SO libspdk_scheduler_dynamic.so.4.0 00:07:31.064 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:31.064 LIB libspdk_keyring_linux.a 00:07:31.064 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:31.064 LIB libspdk_blob_bdev.a 00:07:31.064 LIB libspdk_keyring_file.a 00:07:31.064 SYMLINK libspdk_scheduler_gscheduler.so 00:07:31.064 CC module/fsdev/aio/linux_aio_mgr.o 00:07:31.064 SO libspdk_keyring_linux.so.1.0 00:07:31.064 SYMLINK libspdk_scheduler_dynamic.so 00:07:31.064 SO libspdk_blob_bdev.so.12.0 00:07:31.064 CC module/accel/ioat/accel_ioat.o 00:07:31.064 SO 
libspdk_keyring_file.so.2.0 00:07:31.064 CC module/accel/ioat/accel_ioat_rpc.o 00:07:31.064 SYMLINK libspdk_keyring_linux.so 00:07:31.064 LIB libspdk_accel_error.a 00:07:31.064 SYMLINK libspdk_blob_bdev.so 00:07:31.064 SYMLINK libspdk_keyring_file.so 00:07:31.064 SO libspdk_accel_error.so.2.0 00:07:31.322 SYMLINK libspdk_accel_error.so 00:07:31.322 LIB libspdk_accel_ioat.a 00:07:31.322 CC module/accel/dsa/accel_dsa.o 00:07:31.322 SO libspdk_accel_ioat.so.6.0 00:07:31.322 CC module/accel/iaa/accel_iaa.o 00:07:31.322 SYMLINK libspdk_accel_ioat.so 00:07:31.322 CC module/bdev/delay/vbdev_delay.o 00:07:31.322 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:31.322 CC module/bdev/gpt/gpt.o 00:07:31.322 CC module/bdev/error/vbdev_error.o 00:07:31.581 CC module/bdev/lvol/vbdev_lvol.o 00:07:31.581 CC module/blobfs/bdev/blobfs_bdev.o 00:07:31.581 LIB libspdk_fsdev_aio.a 00:07:31.581 SO libspdk_fsdev_aio.so.1.0 00:07:31.581 CC module/accel/iaa/accel_iaa_rpc.o 00:07:31.581 LIB libspdk_sock_posix.a 00:07:31.581 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:31.581 SO libspdk_sock_posix.so.6.0 00:07:31.581 CC module/bdev/gpt/vbdev_gpt.o 00:07:31.581 SYMLINK libspdk_fsdev_aio.so 00:07:31.581 CC module/accel/dsa/accel_dsa_rpc.o 00:07:31.581 CC module/bdev/error/vbdev_error_rpc.o 00:07:31.839 SYMLINK libspdk_sock_posix.so 00:07:31.839 LIB libspdk_accel_iaa.a 00:07:31.839 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:31.839 SO libspdk_accel_iaa.so.3.0 00:07:31.839 SYMLINK libspdk_accel_iaa.so 00:07:31.839 LIB libspdk_accel_dsa.a 00:07:31.839 LIB libspdk_bdev_delay.a 00:07:31.839 SO libspdk_accel_dsa.so.5.0 00:07:31.839 SO libspdk_bdev_delay.so.6.0 00:07:31.839 LIB libspdk_bdev_error.a 00:07:31.839 CC module/bdev/malloc/bdev_malloc.o 00:07:31.839 LIB libspdk_blobfs_bdev.a 00:07:32.097 SO libspdk_bdev_error.so.6.0 00:07:32.097 SYMLINK libspdk_accel_dsa.so 00:07:32.097 SO libspdk_blobfs_bdev.so.6.0 00:07:32.097 SYMLINK libspdk_bdev_delay.so 00:07:32.097 LIB libspdk_bdev_gpt.a 00:07:32.097 CC 
module/bdev/null/bdev_null.o 00:07:32.097 CC module/bdev/nvme/bdev_nvme.o 00:07:32.097 SO libspdk_bdev_gpt.so.6.0 00:07:32.097 SYMLINK libspdk_bdev_error.so 00:07:32.097 SYMLINK libspdk_blobfs_bdev.so 00:07:32.097 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:32.097 CC module/bdev/null/bdev_null_rpc.o 00:07:32.097 LIB libspdk_bdev_lvol.a 00:07:32.097 SYMLINK libspdk_bdev_gpt.so 00:07:32.097 SO libspdk_bdev_lvol.so.6.0 00:07:32.097 CC module/bdev/passthru/vbdev_passthru.o 00:07:32.355 CC module/bdev/raid/bdev_raid.o 00:07:32.355 SYMLINK libspdk_bdev_lvol.so 00:07:32.355 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:32.355 CC module/bdev/split/vbdev_split.o 00:07:32.355 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:32.355 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:32.355 LIB libspdk_bdev_null.a 00:07:32.355 CC module/bdev/nvme/nvme_rpc.o 00:07:32.355 CC module/bdev/aio/bdev_aio.o 00:07:32.613 SO libspdk_bdev_null.so.6.0 00:07:32.613 CC module/bdev/split/vbdev_split_rpc.o 00:07:32.613 LIB libspdk_bdev_malloc.a 00:07:32.613 SYMLINK libspdk_bdev_null.so 00:07:32.613 SO libspdk_bdev_malloc.so.6.0 00:07:32.613 SYMLINK libspdk_bdev_malloc.so 00:07:32.613 LIB libspdk_bdev_split.a 00:07:32.869 CC module/bdev/nvme/bdev_mdns_client.o 00:07:32.869 LIB libspdk_bdev_passthru.a 00:07:32.869 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:32.869 SO libspdk_bdev_split.so.6.0 00:07:32.869 SO libspdk_bdev_passthru.so.6.0 00:07:32.869 CC module/bdev/ftl/bdev_ftl.o 00:07:32.869 SYMLINK libspdk_bdev_split.so 00:07:32.869 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:32.869 SYMLINK libspdk_bdev_passthru.so 00:07:32.869 CC module/bdev/aio/bdev_aio_rpc.o 00:07:32.869 CC module/bdev/iscsi/bdev_iscsi.o 00:07:32.869 CC module/bdev/nvme/vbdev_opal.o 00:07:33.126 LIB libspdk_bdev_zone_block.a 00:07:33.126 SO libspdk_bdev_zone_block.so.6.0 00:07:33.126 LIB libspdk_bdev_aio.a 00:07:33.126 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:33.126 CC module/bdev/virtio/bdev_virtio_scsi.o 
00:07:33.126 SYMLINK libspdk_bdev_zone_block.so 00:07:33.126 CC module/bdev/raid/bdev_raid_rpc.o 00:07:33.126 SO libspdk_bdev_aio.so.6.0 00:07:33.126 LIB libspdk_bdev_ftl.a 00:07:33.126 SO libspdk_bdev_ftl.so.6.0 00:07:33.126 SYMLINK libspdk_bdev_aio.so 00:07:33.126 CC module/bdev/raid/bdev_raid_sb.o 00:07:33.384 CC module/bdev/raid/raid0.o 00:07:33.384 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:33.384 SYMLINK libspdk_bdev_ftl.so 00:07:33.384 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:33.384 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:33.384 LIB libspdk_bdev_iscsi.a 00:07:33.384 SO libspdk_bdev_iscsi.so.6.0 00:07:33.384 CC module/bdev/raid/raid1.o 00:07:33.643 SYMLINK libspdk_bdev_iscsi.so 00:07:33.643 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:33.643 CC module/bdev/raid/concat.o 00:07:33.643 CC module/bdev/raid/raid5f.o 00:07:33.900 LIB libspdk_bdev_virtio.a 00:07:33.900 SO libspdk_bdev_virtio.so.6.0 00:07:33.900 SYMLINK libspdk_bdev_virtio.so 00:07:34.158 LIB libspdk_bdev_raid.a 00:07:34.416 SO libspdk_bdev_raid.so.6.0 00:07:34.416 SYMLINK libspdk_bdev_raid.so 00:07:35.786 LIB libspdk_bdev_nvme.a 00:07:35.786 SO libspdk_bdev_nvme.so.7.1 00:07:36.044 SYMLINK libspdk_bdev_nvme.so 00:07:36.609 CC module/event/subsystems/fsdev/fsdev.o 00:07:36.609 CC module/event/subsystems/vmd/vmd.o 00:07:36.609 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:36.609 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:36.609 CC module/event/subsystems/sock/sock.o 00:07:36.609 CC module/event/subsystems/scheduler/scheduler.o 00:07:36.609 CC module/event/subsystems/keyring/keyring.o 00:07:36.609 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:36.609 CC module/event/subsystems/iobuf/iobuf.o 00:07:36.609 LIB libspdk_event_fsdev.a 00:07:36.609 LIB libspdk_event_iobuf.a 00:07:36.609 SO libspdk_event_fsdev.so.1.0 00:07:36.609 LIB libspdk_event_vhost_blk.a 00:07:36.609 LIB libspdk_event_sock.a 00:07:36.609 LIB libspdk_event_keyring.a 00:07:36.866 SO libspdk_event_iobuf.so.3.0 
00:07:36.866 LIB libspdk_event_vmd.a 00:07:36.867 SO libspdk_event_sock.so.5.0 00:07:36.867 SO libspdk_event_vhost_blk.so.3.0 00:07:36.867 SO libspdk_event_keyring.so.1.0 00:07:36.867 SO libspdk_event_vmd.so.6.0 00:07:36.867 LIB libspdk_event_scheduler.a 00:07:36.867 SYMLINK libspdk_event_fsdev.so 00:07:36.867 SYMLINK libspdk_event_vhost_blk.so 00:07:36.867 SYMLINK libspdk_event_sock.so 00:07:36.867 SYMLINK libspdk_event_iobuf.so 00:07:36.867 SO libspdk_event_scheduler.so.4.0 00:07:36.867 SYMLINK libspdk_event_keyring.so 00:07:36.867 SYMLINK libspdk_event_vmd.so 00:07:36.867 SYMLINK libspdk_event_scheduler.so 00:07:37.125 CC module/event/subsystems/accel/accel.o 00:07:37.385 LIB libspdk_event_accel.a 00:07:37.385 SO libspdk_event_accel.so.6.0 00:07:37.385 SYMLINK libspdk_event_accel.so 00:07:37.643 CC module/event/subsystems/bdev/bdev.o 00:07:37.902 LIB libspdk_event_bdev.a 00:07:37.902 SO libspdk_event_bdev.so.6.0 00:07:38.162 SYMLINK libspdk_event_bdev.so 00:07:38.162 CC module/event/subsystems/nbd/nbd.o 00:07:38.162 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:38.162 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:38.162 CC module/event/subsystems/scsi/scsi.o 00:07:38.162 CC module/event/subsystems/ublk/ublk.o 00:07:38.421 LIB libspdk_event_nbd.a 00:07:38.421 LIB libspdk_event_ublk.a 00:07:38.421 LIB libspdk_event_scsi.a 00:07:38.421 SO libspdk_event_nbd.so.6.0 00:07:38.421 SO libspdk_event_ublk.so.3.0 00:07:38.421 SO libspdk_event_scsi.so.6.0 00:07:38.421 SYMLINK libspdk_event_nbd.so 00:07:38.421 LIB libspdk_event_nvmf.a 00:07:38.680 SYMLINK libspdk_event_ublk.so 00:07:38.680 SYMLINK libspdk_event_scsi.so 00:07:38.680 SO libspdk_event_nvmf.so.6.0 00:07:38.680 SYMLINK libspdk_event_nvmf.so 00:07:38.938 CC module/event/subsystems/iscsi/iscsi.o 00:07:38.938 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:38.938 LIB libspdk_event_vhost_scsi.a 00:07:38.938 SO libspdk_event_vhost_scsi.so.3.0 00:07:39.197 LIB libspdk_event_iscsi.a 00:07:39.197 SO 
libspdk_event_iscsi.so.6.0 00:07:39.197 SYMLINK libspdk_event_vhost_scsi.so 00:07:39.197 SYMLINK libspdk_event_iscsi.so 00:07:39.455 SO libspdk.so.6.0 00:07:39.455 SYMLINK libspdk.so 00:07:39.714 CC test/rpc_client/rpc_client_test.o 00:07:39.714 CXX app/trace/trace.o 00:07:39.714 TEST_HEADER include/spdk/accel.h 00:07:39.714 TEST_HEADER include/spdk/accel_module.h 00:07:39.714 TEST_HEADER include/spdk/assert.h 00:07:39.714 TEST_HEADER include/spdk/barrier.h 00:07:39.714 TEST_HEADER include/spdk/base64.h 00:07:39.714 TEST_HEADER include/spdk/bdev.h 00:07:39.714 TEST_HEADER include/spdk/bdev_module.h 00:07:39.714 TEST_HEADER include/spdk/bdev_zone.h 00:07:39.714 TEST_HEADER include/spdk/bit_array.h 00:07:39.714 TEST_HEADER include/spdk/bit_pool.h 00:07:39.714 TEST_HEADER include/spdk/blob_bdev.h 00:07:39.714 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:39.714 TEST_HEADER include/spdk/blobfs.h 00:07:39.714 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:39.714 TEST_HEADER include/spdk/blob.h 00:07:39.714 TEST_HEADER include/spdk/conf.h 00:07:39.714 TEST_HEADER include/spdk/config.h 00:07:39.714 TEST_HEADER include/spdk/cpuset.h 00:07:39.714 TEST_HEADER include/spdk/crc16.h 00:07:39.714 TEST_HEADER include/spdk/crc32.h 00:07:39.714 TEST_HEADER include/spdk/crc64.h 00:07:39.714 TEST_HEADER include/spdk/dif.h 00:07:39.714 TEST_HEADER include/spdk/dma.h 00:07:39.714 TEST_HEADER include/spdk/endian.h 00:07:39.714 TEST_HEADER include/spdk/env_dpdk.h 00:07:39.714 TEST_HEADER include/spdk/env.h 00:07:39.714 TEST_HEADER include/spdk/event.h 00:07:39.714 TEST_HEADER include/spdk/fd_group.h 00:07:39.714 TEST_HEADER include/spdk/fd.h 00:07:39.714 TEST_HEADER include/spdk/file.h 00:07:39.714 TEST_HEADER include/spdk/fsdev.h 00:07:39.714 TEST_HEADER include/spdk/fsdev_module.h 00:07:39.714 TEST_HEADER include/spdk/ftl.h 00:07:39.714 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:39.714 CC examples/util/zipf/zipf.o 00:07:39.714 TEST_HEADER include/spdk/gpt_spec.h 00:07:39.714 CC 
examples/ioat/perf/perf.o 00:07:39.714 TEST_HEADER include/spdk/hexlify.h 00:07:39.714 TEST_HEADER include/spdk/histogram_data.h 00:07:39.714 TEST_HEADER include/spdk/idxd.h 00:07:39.714 TEST_HEADER include/spdk/idxd_spec.h 00:07:39.714 TEST_HEADER include/spdk/init.h 00:07:39.714 TEST_HEADER include/spdk/ioat.h 00:07:39.714 TEST_HEADER include/spdk/ioat_spec.h 00:07:39.714 TEST_HEADER include/spdk/iscsi_spec.h 00:07:39.714 CC test/thread/poller_perf/poller_perf.o 00:07:39.714 TEST_HEADER include/spdk/json.h 00:07:39.714 TEST_HEADER include/spdk/jsonrpc.h 00:07:39.714 TEST_HEADER include/spdk/keyring.h 00:07:39.714 TEST_HEADER include/spdk/keyring_module.h 00:07:39.714 TEST_HEADER include/spdk/likely.h 00:07:39.714 TEST_HEADER include/spdk/log.h 00:07:39.714 TEST_HEADER include/spdk/lvol.h 00:07:39.714 TEST_HEADER include/spdk/md5.h 00:07:39.714 TEST_HEADER include/spdk/memory.h 00:07:39.714 TEST_HEADER include/spdk/mmio.h 00:07:39.714 TEST_HEADER include/spdk/nbd.h 00:07:39.714 TEST_HEADER include/spdk/net.h 00:07:39.714 TEST_HEADER include/spdk/notify.h 00:07:39.714 TEST_HEADER include/spdk/nvme.h 00:07:39.714 TEST_HEADER include/spdk/nvme_intel.h 00:07:39.714 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:39.714 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:39.714 CC test/app/bdev_svc/bdev_svc.o 00:07:39.714 TEST_HEADER include/spdk/nvme_spec.h 00:07:39.714 TEST_HEADER include/spdk/nvme_zns.h 00:07:39.714 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:39.714 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:39.714 TEST_HEADER include/spdk/nvmf.h 00:07:39.714 TEST_HEADER include/spdk/nvmf_spec.h 00:07:39.714 TEST_HEADER include/spdk/nvmf_transport.h 00:07:39.714 CC test/dma/test_dma/test_dma.o 00:07:39.715 TEST_HEADER include/spdk/opal.h 00:07:39.715 TEST_HEADER include/spdk/opal_spec.h 00:07:39.973 TEST_HEADER include/spdk/pci_ids.h 00:07:39.973 TEST_HEADER include/spdk/pipe.h 00:07:39.973 TEST_HEADER include/spdk/queue.h 00:07:39.973 TEST_HEADER include/spdk/reduce.h 
00:07:39.973 TEST_HEADER include/spdk/rpc.h 00:07:39.973 TEST_HEADER include/spdk/scheduler.h 00:07:39.973 TEST_HEADER include/spdk/scsi.h 00:07:39.973 TEST_HEADER include/spdk/scsi_spec.h 00:07:39.973 TEST_HEADER include/spdk/sock.h 00:07:39.973 TEST_HEADER include/spdk/stdinc.h 00:07:39.973 CC test/env/mem_callbacks/mem_callbacks.o 00:07:39.973 TEST_HEADER include/spdk/string.h 00:07:39.973 TEST_HEADER include/spdk/thread.h 00:07:39.973 TEST_HEADER include/spdk/trace.h 00:07:39.973 TEST_HEADER include/spdk/trace_parser.h 00:07:39.973 TEST_HEADER include/spdk/tree.h 00:07:39.973 TEST_HEADER include/spdk/ublk.h 00:07:39.973 TEST_HEADER include/spdk/util.h 00:07:39.973 TEST_HEADER include/spdk/uuid.h 00:07:39.973 TEST_HEADER include/spdk/version.h 00:07:39.973 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:39.973 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:39.973 TEST_HEADER include/spdk/vhost.h 00:07:39.973 TEST_HEADER include/spdk/vmd.h 00:07:39.973 TEST_HEADER include/spdk/xor.h 00:07:39.973 TEST_HEADER include/spdk/zipf.h 00:07:39.973 CXX test/cpp_headers/accel.o 00:07:39.973 LINK interrupt_tgt 00:07:39.973 LINK ioat_perf 00:07:39.973 LINK rpc_client_test 00:07:39.973 LINK zipf 00:07:39.973 LINK poller_perf 00:07:40.232 LINK spdk_trace 00:07:40.232 CXX test/cpp_headers/accel_module.o 00:07:40.232 LINK bdev_svc 00:07:40.232 CXX test/cpp_headers/assert.o 00:07:40.232 CXX test/cpp_headers/barrier.o 00:07:40.232 CXX test/cpp_headers/base64.o 00:07:40.232 CXX test/cpp_headers/bdev.o 00:07:40.232 CC examples/ioat/verify/verify.o 00:07:40.490 CC test/env/vtophys/vtophys.o 00:07:40.490 CXX test/cpp_headers/bdev_module.o 00:07:40.749 CC app/trace_record/trace_record.o 00:07:40.749 LINK verify 00:07:40.749 CC test/event/event_perf/event_perf.o 00:07:40.749 CC test/event/reactor/reactor.o 00:07:40.749 LINK vtophys 00:07:40.749 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:40.749 LINK mem_callbacks 00:07:40.749 LINK test_dma 00:07:40.749 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:41.008 LINK event_perf 00:07:41.008 CXX test/cpp_headers/bdev_zone.o 00:07:41.008 LINK spdk_trace_record 00:07:41.009 LINK reactor 00:07:41.267 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:41.267 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:41.267 CXX test/cpp_headers/bit_array.o 00:07:41.267 CC test/event/reactor_perf/reactor_perf.o 00:07:41.267 CC examples/thread/thread/thread_ex.o 00:07:41.267 CC test/event/app_repeat/app_repeat.o 00:07:41.268 CC test/app/histogram_perf/histogram_perf.o 00:07:41.268 CC app/nvmf_tgt/nvmf_main.o 00:07:41.268 LINK env_dpdk_post_init 00:07:41.268 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:41.268 LINK nvme_fuzz 00:07:41.527 CXX test/cpp_headers/bit_pool.o 00:07:41.527 LINK reactor_perf 00:07:41.527 LINK histogram_perf 00:07:41.527 LINK app_repeat 00:07:41.527 LINK nvmf_tgt 00:07:41.527 LINK thread 00:07:41.785 CXX test/cpp_headers/blob_bdev.o 00:07:41.785 CC test/env/memory/memory_ut.o 00:07:41.785 CC app/spdk_lspci/spdk_lspci.o 00:07:41.785 CC app/iscsi_tgt/iscsi_tgt.o 00:07:41.785 CC app/spdk_tgt/spdk_tgt.o 00:07:41.785 CXX test/cpp_headers/blobfs_bdev.o 00:07:42.044 CC test/app/jsoncat/jsoncat.o 00:07:42.044 LINK spdk_lspci 00:07:42.044 CC test/event/scheduler/scheduler.o 00:07:42.044 CC examples/sock/hello_world/hello_sock.o 00:07:42.044 LINK iscsi_tgt 00:07:42.044 LINK vhost_fuzz 00:07:42.044 LINK spdk_tgt 00:07:42.044 LINK jsoncat 00:07:42.044 CXX test/cpp_headers/blobfs.o 00:07:42.302 LINK scheduler 00:07:42.302 CC examples/vmd/lsvmd/lsvmd.o 00:07:42.302 CXX test/cpp_headers/blob.o 00:07:42.561 CC examples/idxd/perf/perf.o 00:07:42.561 CC app/spdk_nvme_perf/perf.o 00:07:42.561 LINK lsvmd 00:07:42.561 LINK hello_sock 00:07:42.561 CC examples/accel/perf/accel_perf.o 00:07:42.561 CXX test/cpp_headers/conf.o 00:07:42.561 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:42.818 CXX test/cpp_headers/config.o 00:07:42.818 CXX test/cpp_headers/cpuset.o 00:07:42.818 CC 
test/accel/dif/dif.o 00:07:42.818 CC app/spdk_nvme_identify/identify.o 00:07:42.818 CC examples/vmd/led/led.o 00:07:42.818 LINK idxd_perf 00:07:43.076 LINK led 00:07:43.076 CXX test/cpp_headers/crc16.o 00:07:43.076 LINK hello_fsdev 00:07:43.334 LINK accel_perf 00:07:43.334 LINK memory_ut 00:07:43.334 LINK iscsi_fuzz 00:07:43.334 CC examples/blob/hello_world/hello_blob.o 00:07:43.334 CXX test/cpp_headers/crc32.o 00:07:43.334 CC app/spdk_nvme_discover/discovery_aer.o 00:07:43.593 CXX test/cpp_headers/crc64.o 00:07:43.593 CC test/env/pci/pci_ut.o 00:07:43.593 LINK hello_blob 00:07:43.593 CC test/app/stub/stub.o 00:07:43.901 CC examples/nvme/hello_world/hello_world.o 00:07:43.901 CC examples/blob/cli/blobcli.o 00:07:43.901 LINK spdk_nvme_discover 00:07:43.901 CXX test/cpp_headers/dif.o 00:07:43.901 LINK dif 00:07:43.901 CXX test/cpp_headers/dma.o 00:07:44.159 LINK spdk_nvme_identify 00:07:44.159 LINK hello_world 00:07:44.159 LINK stub 00:07:44.159 CXX test/cpp_headers/endian.o 00:07:44.159 LINK spdk_nvme_perf 00:07:44.159 CXX test/cpp_headers/env_dpdk.o 00:07:44.159 CC app/spdk_top/spdk_top.o 00:07:44.419 CC examples/nvme/reconnect/reconnect.o 00:07:44.419 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:44.419 CC examples/nvme/arbitration/arbitration.o 00:07:44.419 LINK blobcli 00:07:44.419 CC examples/bdev/hello_world/hello_bdev.o 00:07:44.419 CC examples/nvme/hotplug/hotplug.o 00:07:44.419 CXX test/cpp_headers/env.o 00:07:44.677 CC test/blobfs/mkfs/mkfs.o 00:07:44.677 LINK pci_ut 00:07:44.677 LINK hotplug 00:07:44.936 CXX test/cpp_headers/event.o 00:07:44.936 LINK reconnect 00:07:44.936 CXX test/cpp_headers/fd_group.o 00:07:44.936 LINK mkfs 00:07:44.936 LINK hello_bdev 00:07:45.194 LINK nvme_manage 00:07:45.194 CXX test/cpp_headers/fd.o 00:07:45.194 CXX test/cpp_headers/file.o 00:07:45.194 CC examples/bdev/bdevperf/bdevperf.o 00:07:45.194 CXX test/cpp_headers/fsdev.o 00:07:45.194 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:45.194 LINK arbitration 00:07:45.194 CC 
examples/nvme/abort/abort.o 00:07:45.453 CXX test/cpp_headers/fsdev_module.o 00:07:45.453 CXX test/cpp_headers/ftl.o 00:07:45.453 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:45.453 CXX test/cpp_headers/fuse_dispatcher.o 00:07:45.453 LINK cmb_copy 00:07:45.712 CC test/lvol/esnap/esnap.o 00:07:45.712 CXX test/cpp_headers/gpt_spec.o 00:07:45.712 LINK pmr_persistence 00:07:45.712 LINK abort 00:07:45.712 CC app/vhost/vhost.o 00:07:45.712 CC test/nvme/aer/aer.o 00:07:45.971 CC app/spdk_dd/spdk_dd.o 00:07:45.971 LINK spdk_top 00:07:45.971 CXX test/cpp_headers/hexlify.o 00:07:45.971 CC app/fio/nvme/fio_plugin.o 00:07:45.971 CXX test/cpp_headers/histogram_data.o 00:07:46.229 CXX test/cpp_headers/idxd.o 00:07:46.229 LINK vhost 00:07:46.229 LINK aer 00:07:46.488 LINK spdk_dd 00:07:46.488 CXX test/cpp_headers/idxd_spec.o 00:07:46.488 CXX test/cpp_headers/init.o 00:07:46.488 CC test/nvme/reset/reset.o 00:07:46.488 CC app/fio/bdev/fio_plugin.o 00:07:46.746 CC test/nvme/sgl/sgl.o 00:07:46.746 CC test/nvme/e2edp/nvme_dp.o 00:07:46.746 CXX test/cpp_headers/ioat.o 00:07:46.746 LINK bdevperf 00:07:46.746 CC test/nvme/overhead/overhead.o 00:07:47.006 LINK reset 00:07:47.006 LINK sgl 00:07:47.006 LINK nvme_dp 00:07:47.006 CXX test/cpp_headers/ioat_spec.o 00:07:47.006 CC test/bdev/bdevio/bdevio.o 00:07:47.264 CC examples/nvmf/nvmf/nvmf.o 00:07:47.264 LINK spdk_nvme 00:07:47.264 CC test/nvme/err_injection/err_injection.o 00:07:47.264 CXX test/cpp_headers/iscsi_spec.o 00:07:47.521 LINK overhead 00:07:47.521 CXX test/cpp_headers/json.o 00:07:47.521 CXX test/cpp_headers/jsonrpc.o 00:07:47.521 CC test/nvme/startup/startup.o 00:07:47.521 LINK spdk_bdev 00:07:47.521 CXX test/cpp_headers/keyring.o 00:07:47.521 CXX test/cpp_headers/keyring_module.o 00:07:47.521 LINK nvmf 00:07:47.521 LINK startup 00:07:47.778 LINK err_injection 00:07:47.778 CXX test/cpp_headers/likely.o 00:07:47.778 CXX test/cpp_headers/log.o 00:07:47.778 CXX test/cpp_headers/lvol.o 00:07:47.778 CC 
test/nvme/reserve/reserve.o 00:07:47.778 LINK bdevio 00:07:47.778 CC test/nvme/simple_copy/simple_copy.o 00:07:48.036 CXX test/cpp_headers/md5.o 00:07:48.036 CXX test/cpp_headers/memory.o 00:07:48.036 CC test/nvme/connect_stress/connect_stress.o 00:07:48.293 LINK reserve 00:07:48.293 CC test/nvme/compliance/nvme_compliance.o 00:07:48.293 CC test/nvme/boot_partition/boot_partition.o 00:07:48.293 CC test/nvme/fused_ordering/fused_ordering.o 00:07:48.293 LINK simple_copy 00:07:48.293 CXX test/cpp_headers/mmio.o 00:07:48.293 LINK connect_stress 00:07:48.293 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:48.293 CXX test/cpp_headers/nbd.o 00:07:48.293 CXX test/cpp_headers/net.o 00:07:48.293 CXX test/cpp_headers/notify.o 00:07:48.293 CXX test/cpp_headers/nvme.o 00:07:48.550 LINK fused_ordering 00:07:48.550 LINK boot_partition 00:07:48.550 CXX test/cpp_headers/nvme_intel.o 00:07:48.550 CC test/nvme/fdp/fdp.o 00:07:48.863 CXX test/cpp_headers/nvme_ocssd.o 00:07:48.863 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:48.863 CXX test/cpp_headers/nvme_spec.o 00:07:48.863 LINK doorbell_aers 00:07:48.863 CXX test/cpp_headers/nvme_zns.o 00:07:48.863 CXX test/cpp_headers/nvmf_cmd.o 00:07:48.863 LINK nvme_compliance 00:07:48.863 CC test/nvme/cuse/cuse.o 00:07:48.863 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:48.863 CXX test/cpp_headers/nvmf.o 00:07:48.863 CXX test/cpp_headers/nvmf_spec.o 00:07:49.120 CXX test/cpp_headers/nvmf_transport.o 00:07:49.120 CXX test/cpp_headers/opal.o 00:07:49.120 CXX test/cpp_headers/opal_spec.o 00:07:49.120 LINK fdp 00:07:49.120 CXX test/cpp_headers/pci_ids.o 00:07:49.120 CXX test/cpp_headers/pipe.o 00:07:49.121 CXX test/cpp_headers/queue.o 00:07:49.121 CXX test/cpp_headers/reduce.o 00:07:49.121 CXX test/cpp_headers/rpc.o 00:07:49.378 CXX test/cpp_headers/scheduler.o 00:07:49.378 CXX test/cpp_headers/scsi.o 00:07:49.378 CXX test/cpp_headers/scsi_spec.o 00:07:49.378 CXX test/cpp_headers/sock.o 00:07:49.378 CXX test/cpp_headers/stdinc.o 00:07:49.378 CXX 
test/cpp_headers/thread.o 00:07:49.378 CXX test/cpp_headers/string.o 00:07:49.378 CXX test/cpp_headers/trace.o 00:07:49.635 CXX test/cpp_headers/trace_parser.o 00:07:49.635 CXX test/cpp_headers/tree.o 00:07:49.635 CXX test/cpp_headers/ublk.o 00:07:49.635 CXX test/cpp_headers/util.o 00:07:49.635 CXX test/cpp_headers/uuid.o 00:07:49.635 CXX test/cpp_headers/version.o 00:07:49.635 CXX test/cpp_headers/vfio_user_pci.o 00:07:49.635 CXX test/cpp_headers/vfio_user_spec.o 00:07:49.635 CXX test/cpp_headers/vhost.o 00:07:49.635 CXX test/cpp_headers/vmd.o 00:07:49.635 CXX test/cpp_headers/xor.o 00:07:49.891 CXX test/cpp_headers/zipf.o 00:07:50.455 LINK cuse 00:07:54.710 LINK esnap 00:07:54.969 ************************************ 00:07:54.969 END TEST make 00:07:54.969 ************************************ 00:07:54.969 00:07:54.969 real 1m53.426s 00:07:54.969 user 10m22.548s 00:07:54.969 sys 1m57.365s 00:07:54.969 06:34:13 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:54.969 06:34:13 make -- common/autotest_common.sh@10 -- $ set +x 00:07:54.969 06:34:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:54.969 06:34:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:54.969 06:34:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:54.969 06:34:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:54.969 06:34:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:54.969 06:34:13 -- pm/common@44 -- $ pid=5247 00:07:54.969 06:34:13 -- pm/common@50 -- $ kill -TERM 5247 00:07:54.969 06:34:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:54.969 06:34:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:54.969 06:34:13 -- pm/common@44 -- $ pid=5249 00:07:54.969 06:34:13 -- pm/common@50 -- $ kill -TERM 5249 00:07:54.969 06:34:13 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || 
SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:54.969 06:34:13 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:54.969 06:34:13 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:54.969 06:34:13 -- common/autotest_common.sh@1711 -- # lcov --version 00:07:54.969 06:34:13 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:55.228 06:34:13 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:55.228 06:34:13 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.228 06:34:13 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.228 06:34:13 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.228 06:34:13 -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.228 06:34:13 -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.228 06:34:13 -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.228 06:34:13 -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.228 06:34:13 -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.228 06:34:13 -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.228 06:34:13 -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.228 06:34:13 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.228 06:34:13 -- scripts/common.sh@344 -- # case "$op" in 00:07:55.228 06:34:13 -- scripts/common.sh@345 -- # : 1 00:07:55.228 06:34:13 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.228 06:34:13 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:55.228 06:34:13 -- scripts/common.sh@365 -- # decimal 1 00:07:55.228 06:34:13 -- scripts/common.sh@353 -- # local d=1 00:07:55.228 06:34:13 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.228 06:34:13 -- scripts/common.sh@355 -- # echo 1 00:07:55.228 06:34:13 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.228 06:34:13 -- scripts/common.sh@366 -- # decimal 2 00:07:55.228 06:34:13 -- scripts/common.sh@353 -- # local d=2 00:07:55.228 06:34:13 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.228 06:34:13 -- scripts/common.sh@355 -- # echo 2 00:07:55.228 06:34:13 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.228 06:34:13 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.228 06:34:13 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.228 06:34:13 -- scripts/common.sh@368 -- # return 0 00:07:55.228 06:34:13 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.228 06:34:13 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:55.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.228 --rc genhtml_branch_coverage=1 00:07:55.228 --rc genhtml_function_coverage=1 00:07:55.228 --rc genhtml_legend=1 00:07:55.228 --rc geninfo_all_blocks=1 00:07:55.228 --rc geninfo_unexecuted_blocks=1 00:07:55.228 00:07:55.228 ' 00:07:55.228 06:34:13 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:55.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.228 --rc genhtml_branch_coverage=1 00:07:55.228 --rc genhtml_function_coverage=1 00:07:55.228 --rc genhtml_legend=1 00:07:55.228 --rc geninfo_all_blocks=1 00:07:55.228 --rc geninfo_unexecuted_blocks=1 00:07:55.228 00:07:55.228 ' 00:07:55.228 06:34:13 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:55.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.228 --rc genhtml_branch_coverage=1 00:07:55.228 --rc 
genhtml_function_coverage=1 00:07:55.228 --rc genhtml_legend=1 00:07:55.228 --rc geninfo_all_blocks=1 00:07:55.228 --rc geninfo_unexecuted_blocks=1 00:07:55.228 00:07:55.228 ' 00:07:55.228 06:34:13 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:55.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.228 --rc genhtml_branch_coverage=1 00:07:55.228 --rc genhtml_function_coverage=1 00:07:55.228 --rc genhtml_legend=1 00:07:55.228 --rc geninfo_all_blocks=1 00:07:55.228 --rc geninfo_unexecuted_blocks=1 00:07:55.228 00:07:55.228 ' 00:07:55.228 06:34:13 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:55.228 06:34:13 -- nvmf/common.sh@7 -- # uname -s 00:07:55.228 06:34:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.228 06:34:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.228 06:34:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.228 06:34:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.228 06:34:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.228 06:34:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.228 06:34:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.228 06:34:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.228 06:34:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.228 06:34:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.228 06:34:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e64019f6-f285-443c-9a8b-a61da1f9d2a5 00:07:55.229 06:34:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=e64019f6-f285-443c-9a8b-a61da1f9d2a5 00:07:55.229 06:34:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.229 06:34:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.229 06:34:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:55.229 06:34:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:07:55.229 06:34:13 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:55.229 06:34:13 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:55.229 06:34:13 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.229 06:34:13 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.229 06:34:13 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.229 06:34:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.229 06:34:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.229 06:34:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.229 06:34:13 -- paths/export.sh@5 -- # export PATH 00:07:55.229 06:34:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.229 06:34:13 -- nvmf/common.sh@51 -- # : 0 00:07:55.229 06:34:13 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:55.229 06:34:13 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:55.229 06:34:13 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:07:55.229 06:34:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.229 06:34:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.229 06:34:13 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:55.229 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:55.229 06:34:13 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:55.229 06:34:13 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:55.229 06:34:13 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:55.229 06:34:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:55.229 06:34:13 -- spdk/autotest.sh@32 -- # uname -s 00:07:55.229 06:34:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:55.229 06:34:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:55.229 06:34:13 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:55.229 06:34:13 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:55.229 06:34:13 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:55.229 06:34:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:55.229 06:34:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:55.229 06:34:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:55.229 06:34:13 -- spdk/autotest.sh@48 -- # udevadm_pid=54495 00:07:55.229 06:34:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:55.229 06:34:13 -- pm/common@17 -- # local monitor 00:07:55.229 06:34:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:55.229 06:34:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:55.229 06:34:13 -- pm/common@25 -- # sleep 1 00:07:55.229 06:34:13 -- pm/common@21 -- # date +%s 00:07:55.229 06:34:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:55.229 06:34:13 -- 
pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733466853 00:07:55.229 06:34:13 -- pm/common@21 -- # date +%s 00:07:55.229 06:34:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733466853 00:07:55.229 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733466853_collect-cpu-load.pm.log 00:07:55.229 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733466853_collect-vmstat.pm.log 00:07:56.168 06:34:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:56.168 06:34:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:56.168 06:34:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:56.168 06:34:14 -- common/autotest_common.sh@10 -- # set +x 00:07:56.168 06:34:14 -- spdk/autotest.sh@59 -- # create_test_list 00:07:56.168 06:34:14 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:56.168 06:34:14 -- common/autotest_common.sh@10 -- # set +x 00:07:56.427 06:34:14 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:56.427 06:34:14 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:56.427 06:34:14 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:07:56.427 06:34:14 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:56.427 06:34:14 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:07:56.427 06:34:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:56.427 06:34:14 -- common/autotest_common.sh@1457 -- # uname 00:07:56.427 06:34:14 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:56.427 06:34:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:56.427 06:34:14 -- common/autotest_common.sh@1477 -- 
# uname 00:07:56.427 06:34:14 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:56.427 06:34:14 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:56.427 06:34:14 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:56.427 lcov: LCOV version 1.15 00:07:56.427 06:34:14 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:14.563 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:14.563 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:32.641 06:34:50 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:32.641 06:34:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:32.641 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:08:32.641 06:34:50 -- spdk/autotest.sh@78 -- # rm -f 00:08:32.641 06:34:50 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:32.641 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:32.641 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:32.641 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:32.641 06:34:50 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:32.641 06:34:50 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:32.641 06:34:50 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:32.641 06:34:50 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:08:32.641 
06:34:50 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:08:32.641 06:34:50 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:08:32.641 06:34:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:32.641 06:34:50 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:08:32.641 06:34:50 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:32.641 06:34:50 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:08:32.641 06:34:50 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:32.641 06:34:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:32.641 06:34:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:32.641 06:34:50 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:32.641 06:34:50 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:08:32.641 06:34:50 -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:08:32.641 06:34:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:08:32.641 06:34:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:32.641 06:34:50 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:32.641 06:34:50 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:08:32.641 06:34:50 -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:08:32.641 06:34:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:08:32.641 06:34:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:32.641 06:34:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:32.641 06:34:50 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:08:32.641 06:34:50 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:32.641 06:34:50 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:08:32.641 06:34:50 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:32.641 06:34:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:32.641 06:34:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:32.641 06:34:50 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:32.641 06:34:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:32.641 06:34:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:32.641 06:34:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:32.641 06:34:50 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:32.641 06:34:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:32.641 No valid GPT data, bailing 00:08:32.641 06:34:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:32.641 06:34:50 -- scripts/common.sh@394 -- # pt= 00:08:32.641 06:34:50 -- scripts/common.sh@395 -- # return 1 00:08:32.641 06:34:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:32.641 1+0 records in 00:08:32.641 1+0 records out 00:08:32.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00409652 s, 256 MB/s 00:08:32.641 06:34:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:32.641 06:34:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:32.641 06:34:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:08:32.641 06:34:50 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:08:32.641 06:34:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:08:32.641 No valid GPT data, bailing 00:08:32.641 06:34:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:08:32.641 06:34:51 -- scripts/common.sh@394 -- # pt= 00:08:32.641 06:34:51 -- scripts/common.sh@395 -- # return 1 00:08:32.641 06:34:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:08:32.641 1+0 records in 00:08:32.641 1+0 records 
out 00:08:32.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00319385 s, 328 MB/s 00:08:32.641 06:34:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:32.641 06:34:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:32.641 06:34:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:08:32.641 06:34:51 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:08:32.641 06:34:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:08:32.641 No valid GPT data, bailing 00:08:32.641 06:34:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:08:32.641 06:34:51 -- scripts/common.sh@394 -- # pt= 00:08:32.641 06:34:51 -- scripts/common.sh@395 -- # return 1 00:08:32.641 06:34:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:08:32.641 1+0 records in 00:08:32.641 1+0 records out 00:08:32.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00390935 s, 268 MB/s 00:08:32.641 06:34:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:32.641 06:34:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:32.641 06:34:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:32.641 06:34:51 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:32.641 06:34:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:32.641 No valid GPT data, bailing 00:08:32.641 06:34:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:32.641 06:34:51 -- scripts/common.sh@394 -- # pt= 00:08:32.641 06:34:51 -- scripts/common.sh@395 -- # return 1 00:08:32.641 06:34:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:32.641 1+0 records in 00:08:32.641 1+0 records out 00:08:32.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00530651 s, 198 MB/s 00:08:32.641 06:34:51 -- spdk/autotest.sh@105 -- # sync 00:08:32.900 06:34:51 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:08:32.901 06:34:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:32.901 06:34:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:34.805 06:34:53 -- spdk/autotest.sh@111 -- # uname -s 00:08:34.805 06:34:53 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:34.805 06:34:53 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:34.805 06:34:53 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:35.742 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:35.742 Hugepages 00:08:35.742 node hugesize free / total 00:08:35.742 node0 1048576kB 0 / 0 00:08:35.742 node0 2048kB 0 / 0 00:08:35.742 00:08:35.742 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:35.742 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:35.742 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:08:35.742 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:08:35.742 06:34:54 -- spdk/autotest.sh@117 -- # uname -s 00:08:35.742 06:34:54 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:35.742 06:34:54 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:35.742 06:34:54 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:36.688 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:36.688 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.688 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:36.688 06:34:55 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:37.641 06:34:56 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:37.641 06:34:56 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:37.641 06:34:56 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:37.641 06:34:56 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
00:08:37.641 06:34:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:37.641 06:34:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:37.641 06:34:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:37.641 06:34:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:37.641 06:34:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:37.921 06:34:56 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:37.921 06:34:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:37.921 06:34:56 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:38.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:38.180 Waiting for block devices as requested 00:08:38.180 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:38.439 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:38.439 06:34:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:38.439 06:34:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:38.439 06:34:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:38.439 06:34:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:08:38.439 06:34:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:38.439 06:34:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:38.439 06:34:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:38.439 06:34:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:08:38.439 06:34:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:08:38.439 
06:34:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:08:38.439 06:34:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:08:38.439 06:34:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:38.439 06:34:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:38.439 06:34:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:38.439 06:34:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:38.439 06:34:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:38.439 06:34:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:38.439 06:34:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:08:38.439 06:34:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:38.439 06:34:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:38.439 06:34:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:38.439 06:34:56 -- common/autotest_common.sh@1543 -- # continue 00:08:38.439 06:34:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:38.439 06:34:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:38.439 06:34:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:38.439 06:34:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:08:38.439 06:34:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:38.439 06:34:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:38.439 06:34:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:38.439 06:34:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:38.439 06:34:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:38.439 06:34:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:38.439 06:34:56 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:38.439 06:34:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:38.439 06:34:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:38.439 06:34:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:38.439 06:34:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:38.439 06:34:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:38.439 06:34:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:38.439 06:34:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:38.439 06:34:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:38.439 06:34:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:38.439 06:34:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:38.439 06:34:56 -- common/autotest_common.sh@1543 -- # continue 00:08:38.439 06:34:56 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:38.439 06:34:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:38.439 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:08:38.439 06:34:57 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:38.439 06:34:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.439 06:34:57 -- common/autotest_common.sh@10 -- # set +x 00:08:38.439 06:34:57 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:39.375 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:39.375 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:39.375 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:39.375 06:34:57 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:39.375 06:34:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.375 06:34:57 -- common/autotest_common.sh@10 -- # set +x 00:08:39.375 06:34:57 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:39.375 06:34:57 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:39.375 06:34:57 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:39.375 06:34:57 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:39.375 06:34:57 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:39.375 06:34:57 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:39.375 06:34:57 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:39.375 06:34:57 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:39.375 06:34:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:39.375 06:34:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:39.376 06:34:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:39.376 06:34:57 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:39.376 06:34:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:39.634 06:34:58 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:39.634 06:34:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:39.634 06:34:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:39.634 06:34:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:39.634 06:34:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:39.634 06:34:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:39.634 06:34:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:39.634 06:34:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:39.634 06:34:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:39.634 06:34:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:39.634 06:34:58 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:08:39.634 06:34:58 -- 
common/autotest_common.sh@1572 -- # return 0 00:08:39.634 06:34:58 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:39.634 06:34:58 -- common/autotest_common.sh@1580 -- # return 0 00:08:39.634 06:34:58 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:39.634 06:34:58 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:39.634 06:34:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:39.634 06:34:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:39.634 06:34:58 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:39.634 06:34:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.634 06:34:58 -- common/autotest_common.sh@10 -- # set +x 00:08:39.634 06:34:58 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:39.634 06:34:58 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:39.634 06:34:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.634 06:34:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.634 06:34:58 -- common/autotest_common.sh@10 -- # set +x 00:08:39.634 ************************************ 00:08:39.634 START TEST env 00:08:39.634 ************************************ 00:08:39.634 06:34:58 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:39.634 * Looking for test storage... 
00:08:39.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:39.634 06:34:58 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:39.634 06:34:58 env -- common/autotest_common.sh@1711 -- # lcov --version 00:08:39.634 06:34:58 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:39.634 06:34:58 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:39.634 06:34:58 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.634 06:34:58 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.634 06:34:58 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.634 06:34:58 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.634 06:34:58 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.634 06:34:58 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.634 06:34:58 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.634 06:34:58 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.634 06:34:58 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.634 06:34:58 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.634 06:34:58 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.634 06:34:58 env -- scripts/common.sh@344 -- # case "$op" in 00:08:39.634 06:34:58 env -- scripts/common.sh@345 -- # : 1 00:08:39.634 06:34:58 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.634 06:34:58 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.634 06:34:58 env -- scripts/common.sh@365 -- # decimal 1 00:08:39.634 06:34:58 env -- scripts/common.sh@353 -- # local d=1 00:08:39.634 06:34:58 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.634 06:34:58 env -- scripts/common.sh@355 -- # echo 1 00:08:39.634 06:34:58 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.634 06:34:58 env -- scripts/common.sh@366 -- # decimal 2 00:08:39.634 06:34:58 env -- scripts/common.sh@353 -- # local d=2 00:08:39.635 06:34:58 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.635 06:34:58 env -- scripts/common.sh@355 -- # echo 2 00:08:39.894 06:34:58 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.894 06:34:58 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.894 06:34:58 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.894 06:34:58 env -- scripts/common.sh@368 -- # return 0 00:08:39.894 06:34:58 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.894 06:34:58 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:39.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.894 --rc genhtml_branch_coverage=1 00:08:39.894 --rc genhtml_function_coverage=1 00:08:39.894 --rc genhtml_legend=1 00:08:39.894 --rc geninfo_all_blocks=1 00:08:39.894 --rc geninfo_unexecuted_blocks=1 00:08:39.894 00:08:39.894 ' 00:08:39.894 06:34:58 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:39.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.894 --rc genhtml_branch_coverage=1 00:08:39.894 --rc genhtml_function_coverage=1 00:08:39.894 --rc genhtml_legend=1 00:08:39.894 --rc geninfo_all_blocks=1 00:08:39.894 --rc geninfo_unexecuted_blocks=1 00:08:39.894 00:08:39.894 ' 00:08:39.894 06:34:58 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:39.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:39.894 --rc genhtml_branch_coverage=1 00:08:39.894 --rc genhtml_function_coverage=1 00:08:39.894 --rc genhtml_legend=1 00:08:39.894 --rc geninfo_all_blocks=1 00:08:39.894 --rc geninfo_unexecuted_blocks=1 00:08:39.894 00:08:39.894 ' 00:08:39.894 06:34:58 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:39.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.894 --rc genhtml_branch_coverage=1 00:08:39.894 --rc genhtml_function_coverage=1 00:08:39.894 --rc genhtml_legend=1 00:08:39.894 --rc geninfo_all_blocks=1 00:08:39.894 --rc geninfo_unexecuted_blocks=1 00:08:39.894 00:08:39.894 ' 00:08:39.894 06:34:58 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:39.894 06:34:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.894 06:34:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.894 06:34:58 env -- common/autotest_common.sh@10 -- # set +x 00:08:39.894 ************************************ 00:08:39.894 START TEST env_memory 00:08:39.894 ************************************ 00:08:39.894 06:34:58 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:39.894 00:08:39.894 00:08:39.894 CUnit - A unit testing framework for C - Version 2.1-3 00:08:39.894 http://cunit.sourceforge.net/ 00:08:39.894 00:08:39.894 00:08:39.894 Suite: memory 00:08:39.894 Test: alloc and free memory map ...[2024-12-06 06:34:58.363320] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:39.894 passed 00:08:39.894 Test: mem map translation ...[2024-12-06 06:34:58.424626] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:39.894 [2024-12-06 06:34:58.424705] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:39.894 [2024-12-06 06:34:58.424804] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:39.894 [2024-12-06 06:34:58.424837] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:39.894 passed 00:08:39.894 Test: mem map registration ...[2024-12-06 06:34:58.526883] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:39.894 [2024-12-06 06:34:58.527025] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:40.153 passed 00:08:40.153 Test: mem map adjacent registrations ...passed 00:08:40.153 00:08:40.153 Run Summary: Type Total Ran Passed Failed Inactive 00:08:40.153 suites 1 1 n/a 0 0 00:08:40.153 tests 4 4 4 0 0 00:08:40.153 asserts 152 152 152 0 n/a 00:08:40.153 00:08:40.153 Elapsed time = 0.344 seconds 00:08:40.153 00:08:40.153 real 0m0.382s 00:08:40.153 user 0m0.353s 00:08:40.153 sys 0m0.022s 00:08:40.153 06:34:58 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.153 ************************************ 00:08:40.153 END TEST env_memory 00:08:40.153 06:34:58 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:40.153 ************************************ 00:08:40.153 06:34:58 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:40.153 06:34:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.153 06:34:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.153 06:34:58 env -- common/autotest_common.sh@10 -- # set +x 00:08:40.153 
************************************ 00:08:40.153 START TEST env_vtophys 00:08:40.153 ************************************ 00:08:40.153 06:34:58 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:40.153 EAL: lib.eal log level changed from notice to debug 00:08:40.153 EAL: Detected lcore 0 as core 0 on socket 0 00:08:40.153 EAL: Detected lcore 1 as core 0 on socket 0 00:08:40.153 EAL: Detected lcore 2 as core 0 on socket 0 00:08:40.153 EAL: Detected lcore 3 as core 0 on socket 0 00:08:40.153 EAL: Detected lcore 4 as core 0 on socket 0 00:08:40.153 EAL: Detected lcore 5 as core 0 on socket 0 00:08:40.153 EAL: Detected lcore 6 as core 0 on socket 0 00:08:40.153 EAL: Detected lcore 7 as core 0 on socket 0 00:08:40.153 EAL: Detected lcore 8 as core 0 on socket 0 00:08:40.153 EAL: Detected lcore 9 as core 0 on socket 0 00:08:40.153 EAL: Maximum logical cores by configuration: 128 00:08:40.153 EAL: Detected CPU lcores: 10 00:08:40.153 EAL: Detected NUMA nodes: 1 00:08:40.153 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:40.153 EAL: Detected shared linkage of DPDK 00:08:40.411 EAL: No shared files mode enabled, IPC will be disabled 00:08:40.411 EAL: Selected IOVA mode 'PA' 00:08:40.411 EAL: Probing VFIO support... 00:08:40.411 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:40.411 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:40.411 EAL: Ask a virtual area of 0x2e000 bytes 00:08:40.411 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:40.411 EAL: Setting up physically contiguous memory... 
00:08:40.411 EAL: Setting maximum number of open files to 524288 00:08:40.411 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:40.411 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:40.411 EAL: Ask a virtual area of 0x61000 bytes 00:08:40.411 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:40.411 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:40.411 EAL: Ask a virtual area of 0x400000000 bytes 00:08:40.411 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:40.411 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:40.411 EAL: Ask a virtual area of 0x61000 bytes 00:08:40.411 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:40.411 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:40.411 EAL: Ask a virtual area of 0x400000000 bytes 00:08:40.411 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:40.411 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:40.411 EAL: Ask a virtual area of 0x61000 bytes 00:08:40.411 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:40.411 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:40.411 EAL: Ask a virtual area of 0x400000000 bytes 00:08:40.411 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:40.411 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:40.411 EAL: Ask a virtual area of 0x61000 bytes 00:08:40.411 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:40.411 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:40.411 EAL: Ask a virtual area of 0x400000000 bytes 00:08:40.411 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:40.411 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:40.411 EAL: Hugepages will be freed exactly as allocated. 
00:08:40.411 EAL: No shared files mode enabled, IPC is disabled 00:08:40.411 EAL: No shared files mode enabled, IPC is disabled 00:08:40.411 EAL: TSC frequency is ~2200000 KHz 00:08:40.411 EAL: Main lcore 0 is ready (tid=7fa1cb118a40;cpuset=[0]) 00:08:40.411 EAL: Trying to obtain current memory policy. 00:08:40.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:40.411 EAL: Restoring previous memory policy: 0 00:08:40.411 EAL: request: mp_malloc_sync 00:08:40.411 EAL: No shared files mode enabled, IPC is disabled 00:08:40.411 EAL: Heap on socket 0 was expanded by 2MB 00:08:40.411 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:40.411 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:40.411 EAL: Mem event callback 'spdk:(nil)' registered 00:08:40.411 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:08:40.411 00:08:40.411 00:08:40.411 CUnit - A unit testing framework for C - Version 2.1-3 00:08:40.411 http://cunit.sourceforge.net/ 00:08:40.411 00:08:40.411 00:08:40.411 Suite: components_suite 00:08:40.998 Test: vtophys_malloc_test ...passed 00:08:40.998 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:40.998 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:40.998 EAL: Restoring previous memory policy: 4 00:08:40.998 EAL: Calling mem event callback 'spdk:(nil)' 00:08:40.998 EAL: request: mp_malloc_sync 00:08:40.998 EAL: No shared files mode enabled, IPC is disabled 00:08:40.998 EAL: Heap on socket 0 was expanded by 4MB 00:08:40.998 EAL: Calling mem event callback 'spdk:(nil)' 00:08:40.998 EAL: request: mp_malloc_sync 00:08:40.998 EAL: No shared files mode enabled, IPC is disabled 00:08:40.998 EAL: Heap on socket 0 was shrunk by 4MB 00:08:40.998 EAL: Trying to obtain current memory policy. 
00:08:40.998 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:40.998 EAL: Restoring previous memory policy: 4 00:08:40.998 EAL: Calling mem event callback 'spdk:(nil)' 00:08:40.998 EAL: request: mp_malloc_sync 00:08:40.998 EAL: No shared files mode enabled, IPC is disabled 00:08:40.998 EAL: Heap on socket 0 was expanded by 6MB 00:08:40.998 EAL: Calling mem event callback 'spdk:(nil)' 00:08:40.998 EAL: request: mp_malloc_sync 00:08:40.998 EAL: No shared files mode enabled, IPC is disabled 00:08:40.998 EAL: Heap on socket 0 was shrunk by 6MB 00:08:40.998 EAL: Trying to obtain current memory policy. 00:08:40.998 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:40.998 EAL: Restoring previous memory policy: 4 00:08:40.998 EAL: Calling mem event callback 'spdk:(nil)' 00:08:40.998 EAL: request: mp_malloc_sync 00:08:40.998 EAL: No shared files mode enabled, IPC is disabled 00:08:40.998 EAL: Heap on socket 0 was expanded by 10MB 00:08:40.998 EAL: Calling mem event callback 'spdk:(nil)' 00:08:40.998 EAL: request: mp_malloc_sync 00:08:40.998 EAL: No shared files mode enabled, IPC is disabled 00:08:40.998 EAL: Heap on socket 0 was shrunk by 10MB 00:08:40.998 EAL: Trying to obtain current memory policy. 00:08:40.998 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:40.999 EAL: Restoring previous memory policy: 4 00:08:40.999 EAL: Calling mem event callback 'spdk:(nil)' 00:08:40.999 EAL: request: mp_malloc_sync 00:08:40.999 EAL: No shared files mode enabled, IPC is disabled 00:08:40.999 EAL: Heap on socket 0 was expanded by 18MB 00:08:40.999 EAL: Calling mem event callback 'spdk:(nil)' 00:08:40.999 EAL: request: mp_malloc_sync 00:08:40.999 EAL: No shared files mode enabled, IPC is disabled 00:08:40.999 EAL: Heap on socket 0 was shrunk by 18MB 00:08:40.999 EAL: Trying to obtain current memory policy. 
00:08:40.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:40.999 EAL: Restoring previous memory policy: 4 00:08:40.999 EAL: Calling mem event callback 'spdk:(nil)' 00:08:40.999 EAL: request: mp_malloc_sync 00:08:40.999 EAL: No shared files mode enabled, IPC is disabled 00:08:40.999 EAL: Heap on socket 0 was expanded by 34MB 00:08:40.999 EAL: Calling mem event callback 'spdk:(nil)' 00:08:40.999 EAL: request: mp_malloc_sync 00:08:40.999 EAL: No shared files mode enabled, IPC is disabled 00:08:40.999 EAL: Heap on socket 0 was shrunk by 34MB 00:08:41.257 EAL: Trying to obtain current memory policy. 00:08:41.257 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:41.257 EAL: Restoring previous memory policy: 4 00:08:41.257 EAL: Calling mem event callback 'spdk:(nil)' 00:08:41.257 EAL: request: mp_malloc_sync 00:08:41.257 EAL: No shared files mode enabled, IPC is disabled 00:08:41.257 EAL: Heap on socket 0 was expanded by 66MB 00:08:41.257 EAL: Calling mem event callback 'spdk:(nil)' 00:08:41.257 EAL: request: mp_malloc_sync 00:08:41.257 EAL: No shared files mode enabled, IPC is disabled 00:08:41.257 EAL: Heap on socket 0 was shrunk by 66MB 00:08:41.516 EAL: Trying to obtain current memory policy. 00:08:41.516 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:41.516 EAL: Restoring previous memory policy: 4 00:08:41.516 EAL: Calling mem event callback 'spdk:(nil)' 00:08:41.516 EAL: request: mp_malloc_sync 00:08:41.516 EAL: No shared files mode enabled, IPC is disabled 00:08:41.516 EAL: Heap on socket 0 was expanded by 130MB 00:08:41.774 EAL: Calling mem event callback 'spdk:(nil)' 00:08:41.774 EAL: request: mp_malloc_sync 00:08:41.774 EAL: No shared files mode enabled, IPC is disabled 00:08:41.774 EAL: Heap on socket 0 was shrunk by 130MB 00:08:41.774 EAL: Trying to obtain current memory policy. 
00:08:41.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:42.033 EAL: Restoring previous memory policy: 4 00:08:42.033 EAL: Calling mem event callback 'spdk:(nil)' 00:08:42.033 EAL: request: mp_malloc_sync 00:08:42.033 EAL: No shared files mode enabled, IPC is disabled 00:08:42.033 EAL: Heap on socket 0 was expanded by 258MB 00:08:42.292 EAL: Calling mem event callback 'spdk:(nil)' 00:08:42.550 EAL: request: mp_malloc_sync 00:08:42.550 EAL: No shared files mode enabled, IPC is disabled 00:08:42.550 EAL: Heap on socket 0 was shrunk by 258MB 00:08:42.809 EAL: Trying to obtain current memory policy. 00:08:42.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:43.068 EAL: Restoring previous memory policy: 4 00:08:43.068 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.068 EAL: request: mp_malloc_sync 00:08:43.068 EAL: No shared files mode enabled, IPC is disabled 00:08:43.068 EAL: Heap on socket 0 was expanded by 514MB 00:08:44.004 EAL: Calling mem event callback 'spdk:(nil)' 00:08:44.004 EAL: request: mp_malloc_sync 00:08:44.004 EAL: No shared files mode enabled, IPC is disabled 00:08:44.004 EAL: Heap on socket 0 was shrunk by 514MB 00:08:44.940 EAL: Trying to obtain current memory policy. 
00:08:44.940 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:45.199 EAL: Restoring previous memory policy: 4 00:08:45.199 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.199 EAL: request: mp_malloc_sync 00:08:45.199 EAL: No shared files mode enabled, IPC is disabled 00:08:45.199 EAL: Heap on socket 0 was expanded by 1026MB 00:08:47.156 EAL: Calling mem event callback 'spdk:(nil)' 00:08:47.156 EAL: request: mp_malloc_sync 00:08:47.156 EAL: No shared files mode enabled, IPC is disabled 00:08:47.156 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:49.055 passed 00:08:49.055 00:08:49.055 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.055 suites 1 1 n/a 0 0 00:08:49.055 tests 2 2 2 0 0 00:08:49.055 asserts 5789 5789 5789 0 n/a 00:08:49.055 00:08:49.055 Elapsed time = 8.129 seconds 00:08:49.055 EAL: Calling mem event callback 'spdk:(nil)' 00:08:49.055 EAL: request: mp_malloc_sync 00:08:49.055 EAL: No shared files mode enabled, IPC is disabled 00:08:49.055 EAL: Heap on socket 0 was shrunk by 2MB 00:08:49.055 EAL: No shared files mode enabled, IPC is disabled 00:08:49.055 EAL: No shared files mode enabled, IPC is disabled 00:08:49.055 EAL: No shared files mode enabled, IPC is disabled 00:08:49.055 00:08:49.055 real 0m8.490s 00:08:49.055 user 0m7.236s 00:08:49.055 sys 0m1.076s 00:08:49.055 06:35:07 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.055 06:35:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:49.055 ************************************ 00:08:49.055 END TEST env_vtophys 00:08:49.055 ************************************ 00:08:49.055 06:35:07 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:49.055 06:35:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.056 06:35:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.056 06:35:07 env -- common/autotest_common.sh@10 -- # set +x 00:08:49.056 
************************************ 00:08:49.056 START TEST env_pci 00:08:49.056 ************************************ 00:08:49.056 06:35:07 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:49.056 00:08:49.056 00:08:49.056 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.056 http://cunit.sourceforge.net/ 00:08:49.056 00:08:49.056 00:08:49.056 Suite: pci 00:08:49.056 Test: pci_hook ...[2024-12-06 06:35:07.313494] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56848 has claimed it 00:08:49.056 EAL: Cannot find device (10000:00:01.0) 00:08:49.056 passed 00:08:49.056 00:08:49.056 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.056 suites 1 1 n/a 0 0 00:08:49.056 tests 1 1 1 0 0 00:08:49.056 asserts 25 25 25 0 n/a 00:08:49.056 00:08:49.056 Elapsed time = 0.009 seconds 00:08:49.056 EAL: Failed to attach device on primary process 00:08:49.056 00:08:49.056 real 0m0.096s 00:08:49.056 user 0m0.047s 00:08:49.056 sys 0m0.046s 00:08:49.056 06:35:07 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.056 ************************************ 00:08:49.056 END TEST env_pci 00:08:49.056 ************************************ 00:08:49.056 06:35:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:49.056 06:35:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:49.056 06:35:07 env -- env/env.sh@15 -- # uname 00:08:49.056 06:35:07 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:49.056 06:35:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:49.056 06:35:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:49.056 06:35:07 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:49.056 06:35:07 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.056 06:35:07 env -- common/autotest_common.sh@10 -- # set +x 00:08:49.056 ************************************ 00:08:49.056 START TEST env_dpdk_post_init 00:08:49.056 ************************************ 00:08:49.056 06:35:07 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:49.056 EAL: Detected CPU lcores: 10 00:08:49.056 EAL: Detected NUMA nodes: 1 00:08:49.056 EAL: Detected shared linkage of DPDK 00:08:49.056 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:49.056 EAL: Selected IOVA mode 'PA' 00:08:49.056 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:49.056 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:49.056 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:08:49.315 Starting DPDK initialization... 00:08:49.315 Starting SPDK post initialization... 00:08:49.315 SPDK NVMe probe 00:08:49.315 Attaching to 0000:00:10.0 00:08:49.315 Attaching to 0000:00:11.0 00:08:49.315 Attached to 0000:00:10.0 00:08:49.315 Attached to 0000:00:11.0 00:08:49.315 Cleaning up... 
00:08:49.315 ************************************ 00:08:49.315 END TEST env_dpdk_post_init 00:08:49.315 ************************************ 00:08:49.315 00:08:49.315 real 0m0.322s 00:08:49.315 user 0m0.114s 00:08:49.315 sys 0m0.107s 00:08:49.315 06:35:07 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.315 06:35:07 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:49.315 06:35:07 env -- env/env.sh@26 -- # uname 00:08:49.315 06:35:07 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:49.315 06:35:07 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:49.315 06:35:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.315 06:35:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.315 06:35:07 env -- common/autotest_common.sh@10 -- # set +x 00:08:49.315 ************************************ 00:08:49.315 START TEST env_mem_callbacks 00:08:49.315 ************************************ 00:08:49.315 06:35:07 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:49.315 EAL: Detected CPU lcores: 10 00:08:49.315 EAL: Detected NUMA nodes: 1 00:08:49.315 EAL: Detected shared linkage of DPDK 00:08:49.315 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:49.315 EAL: Selected IOVA mode 'PA' 00:08:49.574 00:08:49.574 00:08:49.574 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.574 http://cunit.sourceforge.net/ 00:08:49.574 00:08:49.574 00:08:49.574 Suite: memory 00:08:49.574 Test: test ... 
00:08:49.574 register 0x200000200000 2097152 00:08:49.574 malloc 3145728 00:08:49.574 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:49.574 register 0x200000400000 4194304 00:08:49.574 buf 0x2000004fffc0 len 3145728 PASSED 00:08:49.574 malloc 64 00:08:49.574 buf 0x2000004ffec0 len 64 PASSED 00:08:49.574 malloc 4194304 00:08:49.574 register 0x200000800000 6291456 00:08:49.574 buf 0x2000009fffc0 len 4194304 PASSED 00:08:49.574 free 0x2000004fffc0 3145728 00:08:49.574 free 0x2000004ffec0 64 00:08:49.574 unregister 0x200000400000 4194304 PASSED 00:08:49.574 free 0x2000009fffc0 4194304 00:08:49.574 unregister 0x200000800000 6291456 PASSED 00:08:49.574 malloc 8388608 00:08:49.574 register 0x200000400000 10485760 00:08:49.574 buf 0x2000005fffc0 len 8388608 PASSED 00:08:49.574 free 0x2000005fffc0 8388608 00:08:49.574 unregister 0x200000400000 10485760 PASSED 00:08:49.574 passed 00:08:49.574 00:08:49.574 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.574 suites 1 1 n/a 0 0 00:08:49.574 tests 1 1 1 0 0 00:08:49.574 asserts 15 15 15 0 n/a 00:08:49.574 00:08:49.574 Elapsed time = 0.074 seconds 00:08:49.574 00:08:49.574 real 0m0.295s 00:08:49.574 user 0m0.112s 00:08:49.574 sys 0m0.078s 00:08:49.574 06:35:08 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.574 ************************************ 00:08:49.574 END TEST env_mem_callbacks 00:08:49.574 ************************************ 00:08:49.574 06:35:08 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:49.574 ************************************ 00:08:49.574 END TEST env 00:08:49.574 ************************************ 00:08:49.574 00:08:49.574 real 0m10.082s 00:08:49.574 user 0m8.087s 00:08:49.574 sys 0m1.596s 00:08:49.574 06:35:08 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.574 06:35:08 env -- common/autotest_common.sh@10 -- # set +x 00:08:49.574 06:35:08 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:49.574 06:35:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.574 06:35:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.574 06:35:08 -- common/autotest_common.sh@10 -- # set +x 00:08:49.574 ************************************ 00:08:49.574 START TEST rpc 00:08:49.574 ************************************ 00:08:49.574 06:35:08 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:49.833 * Looking for test storage... 00:08:49.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:49.833 06:35:08 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:49.833 06:35:08 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:49.833 06:35:08 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:49.833 06:35:08 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:49.833 06:35:08 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.833 06:35:08 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.833 06:35:08 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.833 06:35:08 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.833 06:35:08 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.833 06:35:08 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.833 06:35:08 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.833 06:35:08 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.833 06:35:08 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.833 06:35:08 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.833 06:35:08 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.833 06:35:08 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:49.833 06:35:08 rpc -- scripts/common.sh@345 -- # : 1 00:08:49.833 06:35:08 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.833 06:35:08 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.833 06:35:08 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:49.833 06:35:08 rpc -- scripts/common.sh@353 -- # local d=1 00:08:49.833 06:35:08 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.833 06:35:08 rpc -- scripts/common.sh@355 -- # echo 1 00:08:49.833 06:35:08 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.833 06:35:08 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:49.833 06:35:08 rpc -- scripts/common.sh@353 -- # local d=2 00:08:49.833 06:35:08 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.833 06:35:08 rpc -- scripts/common.sh@355 -- # echo 2 00:08:49.833 06:35:08 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.833 06:35:08 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.833 06:35:08 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.833 06:35:08 rpc -- scripts/common.sh@368 -- # return 0 00:08:49.833 06:35:08 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.833 06:35:08 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:49.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.833 --rc genhtml_branch_coverage=1 00:08:49.833 --rc genhtml_function_coverage=1 00:08:49.833 --rc genhtml_legend=1 00:08:49.833 --rc geninfo_all_blocks=1 00:08:49.833 --rc geninfo_unexecuted_blocks=1 00:08:49.833 00:08:49.833 ' 00:08:49.833 06:35:08 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:49.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.833 --rc genhtml_branch_coverage=1 00:08:49.833 --rc genhtml_function_coverage=1 00:08:49.833 --rc genhtml_legend=1 00:08:49.833 --rc geninfo_all_blocks=1 00:08:49.833 --rc geninfo_unexecuted_blocks=1 00:08:49.833 00:08:49.833 ' 00:08:49.833 06:35:08 rpc -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:08:49.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.833 --rc genhtml_branch_coverage=1 00:08:49.833 --rc genhtml_function_coverage=1 00:08:49.833 --rc genhtml_legend=1 00:08:49.833 --rc geninfo_all_blocks=1 00:08:49.833 --rc geninfo_unexecuted_blocks=1 00:08:49.833 00:08:49.833 ' 00:08:49.833 06:35:08 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:49.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.833 --rc genhtml_branch_coverage=1 00:08:49.833 --rc genhtml_function_coverage=1 00:08:49.833 --rc genhtml_legend=1 00:08:49.833 --rc geninfo_all_blocks=1 00:08:49.833 --rc geninfo_unexecuted_blocks=1 00:08:49.833 00:08:49.833 ' 00:08:49.833 06:35:08 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56975 00:08:49.833 06:35:08 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:49.833 06:35:08 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:49.833 06:35:08 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56975 00:08:49.833 06:35:08 rpc -- common/autotest_common.sh@835 -- # '[' -z 56975 ']' 00:08:49.833 06:35:08 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.833 06:35:08 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.833 06:35:08 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.833 06:35:08 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.833 06:35:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.092 [2024-12-06 06:35:08.549415] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:08:50.092 [2024-12-06 06:35:08.549903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56975 ] 00:08:50.350 [2024-12-06 06:35:08.742215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.350 [2024-12-06 06:35:08.887114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:50.350 [2024-12-06 06:35:08.887436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56975' to capture a snapshot of events at runtime. 00:08:50.350 [2024-12-06 06:35:08.887633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.350 [2024-12-06 06:35:08.887847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.350 [2024-12-06 06:35:08.887893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56975 for offline analysis/debug. 
00:08:50.350 [2024-12-06 06:35:08.889426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.287 06:35:09 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.287 06:35:09 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:51.287 06:35:09 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:51.287 06:35:09 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:51.287 06:35:09 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:51.287 06:35:09 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:51.287 06:35:09 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.287 06:35:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.287 06:35:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.287 ************************************ 00:08:51.287 START TEST rpc_integrity 00:08:51.287 ************************************ 00:08:51.287 06:35:09 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:51.287 06:35:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:51.287 06:35:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.287 06:35:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:51.287 06:35:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.287 06:35:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:51.287 06:35:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:51.287 06:35:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:51.287 06:35:09 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:51.288 06:35:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.288 06:35:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:51.546 06:35:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.546 06:35:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:51.546 06:35:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:51.546 06:35:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.546 06:35:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:51.547 06:35:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.547 06:35:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:51.547 { 00:08:51.547 "name": "Malloc0", 00:08:51.547 "aliases": [ 00:08:51.547 "788f2d34-af24-462b-a53f-84835060775b" 00:08:51.547 ], 00:08:51.547 "product_name": "Malloc disk", 00:08:51.547 "block_size": 512, 00:08:51.547 "num_blocks": 16384, 00:08:51.547 "uuid": "788f2d34-af24-462b-a53f-84835060775b", 00:08:51.547 "assigned_rate_limits": { 00:08:51.547 "rw_ios_per_sec": 0, 00:08:51.547 "rw_mbytes_per_sec": 0, 00:08:51.547 "r_mbytes_per_sec": 0, 00:08:51.547 "w_mbytes_per_sec": 0 00:08:51.547 }, 00:08:51.547 "claimed": false, 00:08:51.547 "zoned": false, 00:08:51.547 "supported_io_types": { 00:08:51.547 "read": true, 00:08:51.547 "write": true, 00:08:51.547 "unmap": true, 00:08:51.547 "flush": true, 00:08:51.547 "reset": true, 00:08:51.547 "nvme_admin": false, 00:08:51.547 "nvme_io": false, 00:08:51.547 "nvme_io_md": false, 00:08:51.547 "write_zeroes": true, 00:08:51.547 "zcopy": true, 00:08:51.547 "get_zone_info": false, 00:08:51.547 "zone_management": false, 00:08:51.547 "zone_append": false, 00:08:51.547 "compare": false, 00:08:51.547 "compare_and_write": false, 00:08:51.547 "abort": true, 00:08:51.547 "seek_hole": false, 
00:08:51.547 "seek_data": false, 00:08:51.547 "copy": true, 00:08:51.547 "nvme_iov_md": false 00:08:51.547 }, 00:08:51.547 "memory_domains": [ 00:08:51.547 { 00:08:51.547 "dma_device_id": "system", 00:08:51.547 "dma_device_type": 1 00:08:51.547 }, 00:08:51.547 { 00:08:51.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.547 "dma_device_type": 2 00:08:51.547 } 00:08:51.547 ], 00:08:51.547 "driver_specific": {} 00:08:51.547 } 00:08:51.547 ]' 00:08:51.547 06:35:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:51.547 06:35:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:51.547 06:35:10 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:51.547 06:35:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.547 06:35:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:51.547 [2024-12-06 06:35:10.015641] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:51.547 [2024-12-06 06:35:10.015710] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.547 [2024-12-06 06:35:10.015751] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:51.547 [2024-12-06 06:35:10.015773] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.547 [2024-12-06 06:35:10.018967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.547 [2024-12-06 06:35:10.019022] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:51.547 Passthru0 00:08:51.547 06:35:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.547 06:35:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:51.547 06:35:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.547 06:35:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:08:51.547 06:35:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.547 06:35:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:51.547 { 00:08:51.547 "name": "Malloc0", 00:08:51.547 "aliases": [ 00:08:51.547 "788f2d34-af24-462b-a53f-84835060775b" 00:08:51.547 ], 00:08:51.547 "product_name": "Malloc disk", 00:08:51.547 "block_size": 512, 00:08:51.547 "num_blocks": 16384, 00:08:51.547 "uuid": "788f2d34-af24-462b-a53f-84835060775b", 00:08:51.547 "assigned_rate_limits": { 00:08:51.547 "rw_ios_per_sec": 0, 00:08:51.547 "rw_mbytes_per_sec": 0, 00:08:51.547 "r_mbytes_per_sec": 0, 00:08:51.547 "w_mbytes_per_sec": 0 00:08:51.547 }, 00:08:51.547 "claimed": true, 00:08:51.547 "claim_type": "exclusive_write", 00:08:51.547 "zoned": false, 00:08:51.547 "supported_io_types": { 00:08:51.547 "read": true, 00:08:51.547 "write": true, 00:08:51.547 "unmap": true, 00:08:51.547 "flush": true, 00:08:51.547 "reset": true, 00:08:51.547 "nvme_admin": false, 00:08:51.547 "nvme_io": false, 00:08:51.547 "nvme_io_md": false, 00:08:51.547 "write_zeroes": true, 00:08:51.547 "zcopy": true, 00:08:51.547 "get_zone_info": false, 00:08:51.547 "zone_management": false, 00:08:51.547 "zone_append": false, 00:08:51.547 "compare": false, 00:08:51.547 "compare_and_write": false, 00:08:51.547 "abort": true, 00:08:51.547 "seek_hole": false, 00:08:51.547 "seek_data": false, 00:08:51.547 "copy": true, 00:08:51.547 "nvme_iov_md": false 00:08:51.547 }, 00:08:51.547 "memory_domains": [ 00:08:51.547 { 00:08:51.547 "dma_device_id": "system", 00:08:51.547 "dma_device_type": 1 00:08:51.547 }, 00:08:51.547 { 00:08:51.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.547 "dma_device_type": 2 00:08:51.547 } 00:08:51.547 ], 00:08:51.547 "driver_specific": {} 00:08:51.547 }, 00:08:51.547 { 00:08:51.547 "name": "Passthru0", 00:08:51.547 "aliases": [ 00:08:51.547 "fef056f2-f87e-5c2f-9d38-dcfa4a405cf7" 00:08:51.547 ], 00:08:51.547 "product_name": "passthru", 00:08:51.547 
"block_size": 512, 00:08:51.547 "num_blocks": 16384, 00:08:51.547 "uuid": "fef056f2-f87e-5c2f-9d38-dcfa4a405cf7", 00:08:51.547 "assigned_rate_limits": { 00:08:51.547 "rw_ios_per_sec": 0, 00:08:51.547 "rw_mbytes_per_sec": 0, 00:08:51.547 "r_mbytes_per_sec": 0, 00:08:51.547 "w_mbytes_per_sec": 0 00:08:51.547 }, 00:08:51.547 "claimed": false, 00:08:51.547 "zoned": false, 00:08:51.547 "supported_io_types": { 00:08:51.547 "read": true, 00:08:51.547 "write": true, 00:08:51.547 "unmap": true, 00:08:51.547 "flush": true, 00:08:51.547 "reset": true, 00:08:51.547 "nvme_admin": false, 00:08:51.547 "nvme_io": false, 00:08:51.547 "nvme_io_md": false, 00:08:51.547 "write_zeroes": true, 00:08:51.547 "zcopy": true, 00:08:51.547 "get_zone_info": false, 00:08:51.547 "zone_management": false, 00:08:51.547 "zone_append": false, 00:08:51.547 "compare": false, 00:08:51.547 "compare_and_write": false, 00:08:51.547 "abort": true, 00:08:51.547 "seek_hole": false, 00:08:51.547 "seek_data": false, 00:08:51.547 "copy": true, 00:08:51.547 "nvme_iov_md": false 00:08:51.547 }, 00:08:51.547 "memory_domains": [ 00:08:51.547 { 00:08:51.547 "dma_device_id": "system", 00:08:51.547 "dma_device_type": 1 00:08:51.547 }, 00:08:51.547 { 00:08:51.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.547 "dma_device_type": 2 00:08:51.547 } 00:08:51.547 ], 00:08:51.547 "driver_specific": { 00:08:51.547 "passthru": { 00:08:51.547 "name": "Passthru0", 00:08:51.547 "base_bdev_name": "Malloc0" 00:08:51.547 } 00:08:51.547 } 00:08:51.547 } 00:08:51.547 ]' 00:08:51.547 06:35:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:51.547 06:35:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:51.547 06:35:10 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:51.547 06:35:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.547 06:35:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:51.547 06:35:10 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.547 06:35:10 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:51.547 06:35:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.547 06:35:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:51.547 06:35:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.547 06:35:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:51.547 06:35:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.547 06:35:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:51.547 06:35:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.547 06:35:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:51.547 06:35:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:51.806 ************************************ 00:08:51.806 END TEST rpc_integrity 00:08:51.806 ************************************ 00:08:51.806 06:35:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:51.806 00:08:51.806 real 0m0.367s 00:08:51.806 user 0m0.223s 00:08:51.806 sys 0m0.045s 00:08:51.806 06:35:10 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.806 06:35:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:51.806 06:35:10 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:51.806 06:35:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.806 06:35:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.807 06:35:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.807 ************************************ 00:08:51.807 START TEST rpc_plugins 00:08:51.807 ************************************ 00:08:51.807 06:35:10 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:51.807 06:35:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:08:51.807 06:35:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.807 06:35:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:51.807 06:35:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.807 06:35:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:51.807 06:35:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:51.807 06:35:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.807 06:35:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:51.807 06:35:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.807 06:35:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:51.807 { 00:08:51.807 "name": "Malloc1", 00:08:51.807 "aliases": [ 00:08:51.807 "715329e6-7534-453a-aecb-d147158a690d" 00:08:51.807 ], 00:08:51.807 "product_name": "Malloc disk", 00:08:51.807 "block_size": 4096, 00:08:51.807 "num_blocks": 256, 00:08:51.807 "uuid": "715329e6-7534-453a-aecb-d147158a690d", 00:08:51.807 "assigned_rate_limits": { 00:08:51.807 "rw_ios_per_sec": 0, 00:08:51.807 "rw_mbytes_per_sec": 0, 00:08:51.807 "r_mbytes_per_sec": 0, 00:08:51.807 "w_mbytes_per_sec": 0 00:08:51.807 }, 00:08:51.807 "claimed": false, 00:08:51.807 "zoned": false, 00:08:51.807 "supported_io_types": { 00:08:51.807 "read": true, 00:08:51.807 "write": true, 00:08:51.807 "unmap": true, 00:08:51.807 "flush": true, 00:08:51.807 "reset": true, 00:08:51.807 "nvme_admin": false, 00:08:51.807 "nvme_io": false, 00:08:51.807 "nvme_io_md": false, 00:08:51.807 "write_zeroes": true, 00:08:51.807 "zcopy": true, 00:08:51.807 "get_zone_info": false, 00:08:51.807 "zone_management": false, 00:08:51.807 "zone_append": false, 00:08:51.807 "compare": false, 00:08:51.807 "compare_and_write": false, 00:08:51.807 "abort": true, 00:08:51.807 "seek_hole": false, 00:08:51.807 "seek_data": false, 00:08:51.807 "copy": 
true, 00:08:51.807 "nvme_iov_md": false 00:08:51.807 }, 00:08:51.807 "memory_domains": [ 00:08:51.807 { 00:08:51.807 "dma_device_id": "system", 00:08:51.807 "dma_device_type": 1 00:08:51.807 }, 00:08:51.807 { 00:08:51.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.807 "dma_device_type": 2 00:08:51.807 } 00:08:51.807 ], 00:08:51.807 "driver_specific": {} 00:08:51.807 } 00:08:51.807 ]' 00:08:51.807 06:35:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:51.807 06:35:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:51.807 06:35:10 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:51.807 06:35:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.807 06:35:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:51.807 06:35:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.807 06:35:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:51.807 06:35:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.807 06:35:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:51.807 06:35:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.807 06:35:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:51.807 06:35:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:51.807 ************************************ 00:08:51.807 END TEST rpc_plugins 00:08:51.807 ************************************ 00:08:51.807 06:35:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:51.807 00:08:51.807 real 0m0.172s 00:08:51.807 user 0m0.113s 00:08:51.807 sys 0m0.017s 00:08:51.807 06:35:10 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.807 06:35:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:52.067 06:35:10 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:52.067 06:35:10 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.067 06:35:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.067 06:35:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.067 ************************************ 00:08:52.067 START TEST rpc_trace_cmd_test 00:08:52.067 ************************************ 00:08:52.067 06:35:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:08:52.067 06:35:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:52.067 06:35:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:52.067 06:35:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.067 06:35:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.067 06:35:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.067 06:35:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:52.067 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56975", 00:08:52.067 "tpoint_group_mask": "0x8", 00:08:52.067 "iscsi_conn": { 00:08:52.067 "mask": "0x2", 00:08:52.067 "tpoint_mask": "0x0" 00:08:52.067 }, 00:08:52.067 "scsi": { 00:08:52.067 "mask": "0x4", 00:08:52.067 "tpoint_mask": "0x0" 00:08:52.067 }, 00:08:52.067 "bdev": { 00:08:52.067 "mask": "0x8", 00:08:52.067 "tpoint_mask": "0xffffffffffffffff" 00:08:52.067 }, 00:08:52.067 "nvmf_rdma": { 00:08:52.067 "mask": "0x10", 00:08:52.067 "tpoint_mask": "0x0" 00:08:52.067 }, 00:08:52.067 "nvmf_tcp": { 00:08:52.067 "mask": "0x20", 00:08:52.067 "tpoint_mask": "0x0" 00:08:52.067 }, 00:08:52.067 "ftl": { 00:08:52.067 "mask": "0x40", 00:08:52.067 "tpoint_mask": "0x0" 00:08:52.067 }, 00:08:52.067 "blobfs": { 00:08:52.067 "mask": "0x80", 00:08:52.067 "tpoint_mask": "0x0" 00:08:52.067 }, 00:08:52.067 "dsa": { 00:08:52.067 "mask": "0x200", 00:08:52.067 "tpoint_mask": "0x0" 00:08:52.067 }, 00:08:52.067 "thread": { 00:08:52.067 "mask": "0x400", 00:08:52.067 
"tpoint_mask": "0x0" 00:08:52.067 }, 00:08:52.067 "nvme_pcie": { 00:08:52.067 "mask": "0x800", 00:08:52.067 "tpoint_mask": "0x0" 00:08:52.067 }, 00:08:52.067 "iaa": { 00:08:52.067 "mask": "0x1000", 00:08:52.067 "tpoint_mask": "0x0" 00:08:52.067 }, 00:08:52.067 "nvme_tcp": { 00:08:52.067 "mask": "0x2000", 00:08:52.067 "tpoint_mask": "0x0" 00:08:52.067 }, 00:08:52.067 "bdev_nvme": { 00:08:52.067 "mask": "0x4000", 00:08:52.067 "tpoint_mask": "0x0" 00:08:52.067 }, 00:08:52.067 "sock": { 00:08:52.067 "mask": "0x8000", 00:08:52.067 "tpoint_mask": "0x0" 00:08:52.067 }, 00:08:52.067 "blob": { 00:08:52.067 "mask": "0x10000", 00:08:52.067 "tpoint_mask": "0x0" 00:08:52.067 }, 00:08:52.067 "bdev_raid": { 00:08:52.067 "mask": "0x20000", 00:08:52.067 "tpoint_mask": "0x0" 00:08:52.067 }, 00:08:52.067 "scheduler": { 00:08:52.067 "mask": "0x40000", 00:08:52.067 "tpoint_mask": "0x0" 00:08:52.067 } 00:08:52.067 }' 00:08:52.067 06:35:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:52.067 06:35:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:52.067 06:35:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:52.067 06:35:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:52.067 06:35:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:52.067 06:35:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:52.067 06:35:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:52.327 06:35:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:52.327 06:35:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:52.327 ************************************ 00:08:52.327 END TEST rpc_trace_cmd_test 00:08:52.327 ************************************ 00:08:52.327 06:35:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:52.327 00:08:52.327 real 0m0.276s 00:08:52.327 user 
0m0.236s 00:08:52.327 sys 0m0.032s 00:08:52.327 06:35:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.327 06:35:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.327 06:35:10 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:52.327 06:35:10 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:52.327 06:35:10 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:52.327 06:35:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.327 06:35:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.327 06:35:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.327 ************************************ 00:08:52.327 START TEST rpc_daemon_integrity 00:08:52.327 ************************************ 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.327 06:35:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:52.327 { 00:08:52.327 "name": "Malloc2", 00:08:52.327 "aliases": [ 00:08:52.327 "8e6bf6e9-4f61-4fa9-acdf-b67a61139741" 00:08:52.327 ], 00:08:52.327 "product_name": "Malloc disk", 00:08:52.327 "block_size": 512, 00:08:52.327 "num_blocks": 16384, 00:08:52.327 "uuid": "8e6bf6e9-4f61-4fa9-acdf-b67a61139741", 00:08:52.327 "assigned_rate_limits": { 00:08:52.327 "rw_ios_per_sec": 0, 00:08:52.327 "rw_mbytes_per_sec": 0, 00:08:52.327 "r_mbytes_per_sec": 0, 00:08:52.327 "w_mbytes_per_sec": 0 00:08:52.327 }, 00:08:52.327 "claimed": false, 00:08:52.327 "zoned": false, 00:08:52.327 "supported_io_types": { 00:08:52.327 "read": true, 00:08:52.327 "write": true, 00:08:52.327 "unmap": true, 00:08:52.327 "flush": true, 00:08:52.328 "reset": true, 00:08:52.328 "nvme_admin": false, 00:08:52.328 "nvme_io": false, 00:08:52.328 "nvme_io_md": false, 00:08:52.328 "write_zeroes": true, 00:08:52.328 "zcopy": true, 00:08:52.328 "get_zone_info": false, 00:08:52.328 "zone_management": false, 00:08:52.328 "zone_append": false, 00:08:52.328 "compare": false, 00:08:52.328 "compare_and_write": false, 00:08:52.328 "abort": true, 00:08:52.328 "seek_hole": false, 00:08:52.328 "seek_data": false, 00:08:52.328 "copy": true, 00:08:52.328 "nvme_iov_md": false 00:08:52.328 }, 00:08:52.328 "memory_domains": [ 00:08:52.328 { 00:08:52.328 "dma_device_id": "system", 00:08:52.328 "dma_device_type": 1 00:08:52.328 }, 00:08:52.328 { 00:08:52.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.328 "dma_device_type": 2 00:08:52.328 } 
00:08:52.328 ], 00:08:52.328 "driver_specific": {} 00:08:52.328 } 00:08:52.328 ]' 00:08:52.328 06:35:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:52.586 06:35:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:52.586 06:35:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:52.586 06:35:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.586 06:35:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:52.586 [2024-12-06 06:35:10.989104] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:52.586 [2024-12-06 06:35:10.989175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.586 [2024-12-06 06:35:10.989205] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:52.586 [2024-12-06 06:35:10.989223] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.586 [2024-12-06 06:35:10.992335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.586 [2024-12-06 06:35:10.992387] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:52.586 Passthru0 00:08:52.586 06:35:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.586 06:35:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:52.586 06:35:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.586 06:35:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:52.586 06:35:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.586 06:35:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:52.586 { 00:08:52.586 "name": "Malloc2", 00:08:52.586 "aliases": [ 00:08:52.586 "8e6bf6e9-4f61-4fa9-acdf-b67a61139741" 
00:08:52.586 ], 00:08:52.586 "product_name": "Malloc disk", 00:08:52.586 "block_size": 512, 00:08:52.586 "num_blocks": 16384, 00:08:52.586 "uuid": "8e6bf6e9-4f61-4fa9-acdf-b67a61139741", 00:08:52.586 "assigned_rate_limits": { 00:08:52.586 "rw_ios_per_sec": 0, 00:08:52.586 "rw_mbytes_per_sec": 0, 00:08:52.586 "r_mbytes_per_sec": 0, 00:08:52.586 "w_mbytes_per_sec": 0 00:08:52.586 }, 00:08:52.586 "claimed": true, 00:08:52.586 "claim_type": "exclusive_write", 00:08:52.586 "zoned": false, 00:08:52.586 "supported_io_types": { 00:08:52.586 "read": true, 00:08:52.586 "write": true, 00:08:52.586 "unmap": true, 00:08:52.586 "flush": true, 00:08:52.586 "reset": true, 00:08:52.586 "nvme_admin": false, 00:08:52.586 "nvme_io": false, 00:08:52.586 "nvme_io_md": false, 00:08:52.586 "write_zeroes": true, 00:08:52.586 "zcopy": true, 00:08:52.586 "get_zone_info": false, 00:08:52.586 "zone_management": false, 00:08:52.586 "zone_append": false, 00:08:52.586 "compare": false, 00:08:52.586 "compare_and_write": false, 00:08:52.586 "abort": true, 00:08:52.586 "seek_hole": false, 00:08:52.586 "seek_data": false, 00:08:52.586 "copy": true, 00:08:52.586 "nvme_iov_md": false 00:08:52.586 }, 00:08:52.586 "memory_domains": [ 00:08:52.586 { 00:08:52.586 "dma_device_id": "system", 00:08:52.586 "dma_device_type": 1 00:08:52.586 }, 00:08:52.586 { 00:08:52.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.586 "dma_device_type": 2 00:08:52.586 } 00:08:52.586 ], 00:08:52.587 "driver_specific": {} 00:08:52.587 }, 00:08:52.587 { 00:08:52.587 "name": "Passthru0", 00:08:52.587 "aliases": [ 00:08:52.587 "db8b204b-8dde-5649-9f59-8efe9a6c649c" 00:08:52.587 ], 00:08:52.587 "product_name": "passthru", 00:08:52.587 "block_size": 512, 00:08:52.587 "num_blocks": 16384, 00:08:52.587 "uuid": "db8b204b-8dde-5649-9f59-8efe9a6c649c", 00:08:52.587 "assigned_rate_limits": { 00:08:52.587 "rw_ios_per_sec": 0, 00:08:52.587 "rw_mbytes_per_sec": 0, 00:08:52.587 "r_mbytes_per_sec": 0, 00:08:52.587 "w_mbytes_per_sec": 0 
00:08:52.587 }, 00:08:52.587 "claimed": false, 00:08:52.587 "zoned": false, 00:08:52.587 "supported_io_types": { 00:08:52.587 "read": true, 00:08:52.587 "write": true, 00:08:52.587 "unmap": true, 00:08:52.587 "flush": true, 00:08:52.587 "reset": true, 00:08:52.587 "nvme_admin": false, 00:08:52.587 "nvme_io": false, 00:08:52.587 "nvme_io_md": false, 00:08:52.587 "write_zeroes": true, 00:08:52.587 "zcopy": true, 00:08:52.587 "get_zone_info": false, 00:08:52.587 "zone_management": false, 00:08:52.587 "zone_append": false, 00:08:52.587 "compare": false, 00:08:52.587 "compare_and_write": false, 00:08:52.587 "abort": true, 00:08:52.587 "seek_hole": false, 00:08:52.587 "seek_data": false, 00:08:52.587 "copy": true, 00:08:52.587 "nvme_iov_md": false 00:08:52.587 }, 00:08:52.587 "memory_domains": [ 00:08:52.587 { 00:08:52.587 "dma_device_id": "system", 00:08:52.587 "dma_device_type": 1 00:08:52.587 }, 00:08:52.587 { 00:08:52.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.587 "dma_device_type": 2 00:08:52.587 } 00:08:52.587 ], 00:08:52.587 "driver_specific": { 00:08:52.587 "passthru": { 00:08:52.587 "name": "Passthru0", 00:08:52.587 "base_bdev_name": "Malloc2" 00:08:52.587 } 00:08:52.587 } 00:08:52.587 } 00:08:52.587 ]' 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:52.587 ************************************ 00:08:52.587 END TEST rpc_daemon_integrity 00:08:52.587 ************************************ 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:52.587 00:08:52.587 real 0m0.367s 00:08:52.587 user 0m0.231s 00:08:52.587 sys 0m0.037s 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.587 06:35:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:52.587 06:35:11 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:52.587 06:35:11 rpc -- rpc/rpc.sh@84 -- # killprocess 56975 00:08:52.587 06:35:11 rpc -- common/autotest_common.sh@954 -- # '[' -z 56975 ']' 00:08:52.587 06:35:11 rpc -- common/autotest_common.sh@958 -- # kill -0 56975 00:08:52.845 06:35:11 rpc -- common/autotest_common.sh@959 -- # uname 00:08:52.845 06:35:11 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.845 06:35:11 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56975 00:08:52.845 killing process with pid 56975 00:08:52.845 06:35:11 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.845 06:35:11 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:08:52.845 06:35:11 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56975' 00:08:52.845 06:35:11 rpc -- common/autotest_common.sh@973 -- # kill 56975 00:08:52.845 06:35:11 rpc -- common/autotest_common.sh@978 -- # wait 56975 00:08:55.424 00:08:55.424 real 0m5.498s 00:08:55.424 user 0m6.198s 00:08:55.424 sys 0m0.995s 00:08:55.424 06:35:13 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.424 ************************************ 00:08:55.424 END TEST rpc 00:08:55.424 ************************************ 00:08:55.424 06:35:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.424 06:35:13 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:55.424 06:35:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.424 06:35:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.424 06:35:13 -- common/autotest_common.sh@10 -- # set +x 00:08:55.424 ************************************ 00:08:55.424 START TEST skip_rpc 00:08:55.424 ************************************ 00:08:55.424 06:35:13 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:55.424 * Looking for test storage... 
00:08:55.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:55.424 06:35:13 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:55.424 06:35:13 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:55.424 06:35:13 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:55.424 06:35:13 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.424 06:35:13 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:55.424 06:35:13 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.424 06:35:13 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:55.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.424 --rc genhtml_branch_coverage=1 00:08:55.424 --rc genhtml_function_coverage=1 00:08:55.424 --rc genhtml_legend=1 00:08:55.424 --rc geninfo_all_blocks=1 00:08:55.424 --rc geninfo_unexecuted_blocks=1 00:08:55.424 00:08:55.424 ' 00:08:55.424 06:35:13 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:55.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.424 --rc genhtml_branch_coverage=1 00:08:55.424 --rc genhtml_function_coverage=1 00:08:55.424 --rc genhtml_legend=1 00:08:55.424 --rc geninfo_all_blocks=1 00:08:55.424 --rc geninfo_unexecuted_blocks=1 00:08:55.424 00:08:55.424 ' 00:08:55.424 06:35:13 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:08:55.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.424 --rc genhtml_branch_coverage=1 00:08:55.424 --rc genhtml_function_coverage=1 00:08:55.424 --rc genhtml_legend=1 00:08:55.424 --rc geninfo_all_blocks=1 00:08:55.424 --rc geninfo_unexecuted_blocks=1 00:08:55.424 00:08:55.424 ' 00:08:55.424 06:35:13 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:55.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.424 --rc genhtml_branch_coverage=1 00:08:55.424 --rc genhtml_function_coverage=1 00:08:55.424 --rc genhtml_legend=1 00:08:55.424 --rc geninfo_all_blocks=1 00:08:55.424 --rc geninfo_unexecuted_blocks=1 00:08:55.424 00:08:55.424 ' 00:08:55.424 06:35:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:55.424 06:35:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:55.424 06:35:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:55.424 06:35:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.424 06:35:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.424 06:35:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.424 ************************************ 00:08:55.424 START TEST skip_rpc 00:08:55.424 ************************************ 00:08:55.424 06:35:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:55.424 06:35:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57209 00:08:55.424 06:35:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:55.424 06:35:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:55.424 06:35:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:55.682 [2024-12-06 06:35:14.104348] Starting SPDK v25.01-pre 
git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:08:55.682 [2024-12-06 06:35:14.104577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57209 ] 00:08:55.682 [2024-12-06 06:35:14.302434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.940 [2024-12-06 06:35:14.466129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57209 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57209 ']' 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57209 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.212 06:35:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57209 00:09:01.212 06:35:19 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.212 killing process with pid 57209 00:09:01.212 06:35:19 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.212 06:35:19 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57209' 00:09:01.212 06:35:19 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57209 00:09:01.212 06:35:19 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57209 00:09:03.116 00:09:03.116 real 0m7.465s 00:09:03.116 user 0m6.843s 00:09:03.116 sys 0m0.506s 00:09:03.116 06:35:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.116 06:35:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.116 ************************************ 00:09:03.116 END TEST skip_rpc 00:09:03.116 ************************************ 00:09:03.116 06:35:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:03.116 06:35:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.116 06:35:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.116 06:35:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.116 
************************************ 00:09:03.116 START TEST skip_rpc_with_json 00:09:03.116 ************************************ 00:09:03.116 06:35:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:09:03.116 06:35:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:03.116 06:35:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57319 00:09:03.116 06:35:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:03.116 06:35:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:03.116 06:35:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57319 00:09:03.116 06:35:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57319 ']' 00:09:03.116 06:35:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.116 06:35:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.116 06:35:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.116 06:35:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.116 06:35:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:03.116 [2024-12-06 06:35:21.625590] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:09:03.117 [2024-12-06 06:35:21.626094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57319 ] 00:09:03.376 [2024-12-06 06:35:21.816195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.376 [2024-12-06 06:35:21.954673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.313 06:35:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.313 06:35:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:04.313 06:35:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:04.313 06:35:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.313 06:35:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:04.313 [2024-12-06 06:35:22.875090] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:04.313 request: 00:09:04.313 { 00:09:04.313 "trtype": "tcp", 00:09:04.313 "method": "nvmf_get_transports", 00:09:04.313 "req_id": 1 00:09:04.313 } 00:09:04.313 Got JSON-RPC error response 00:09:04.313 response: 00:09:04.313 { 00:09:04.313 "code": -19, 00:09:04.313 "message": "No such device" 00:09:04.313 } 00:09:04.313 06:35:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:04.313 06:35:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:04.313 06:35:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.313 06:35:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:04.313 [2024-12-06 06:35:22.887259] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:09:04.313 06:35:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.313 06:35:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:04.313 06:35:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.313 06:35:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:04.573 06:35:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.573 06:35:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:04.573 { 00:09:04.573 "subsystems": [ 00:09:04.573 { 00:09:04.573 "subsystem": "fsdev", 00:09:04.573 "config": [ 00:09:04.573 { 00:09:04.573 "method": "fsdev_set_opts", 00:09:04.573 "params": { 00:09:04.573 "fsdev_io_pool_size": 65535, 00:09:04.573 "fsdev_io_cache_size": 256 00:09:04.573 } 00:09:04.573 } 00:09:04.573 ] 00:09:04.573 }, 00:09:04.573 { 00:09:04.573 "subsystem": "keyring", 00:09:04.573 "config": [] 00:09:04.573 }, 00:09:04.573 { 00:09:04.573 "subsystem": "iobuf", 00:09:04.573 "config": [ 00:09:04.573 { 00:09:04.573 "method": "iobuf_set_options", 00:09:04.573 "params": { 00:09:04.573 "small_pool_count": 8192, 00:09:04.573 "large_pool_count": 1024, 00:09:04.573 "small_bufsize": 8192, 00:09:04.573 "large_bufsize": 135168, 00:09:04.573 "enable_numa": false 00:09:04.573 } 00:09:04.573 } 00:09:04.573 ] 00:09:04.573 }, 00:09:04.573 { 00:09:04.573 "subsystem": "sock", 00:09:04.573 "config": [ 00:09:04.573 { 00:09:04.573 "method": "sock_set_default_impl", 00:09:04.573 "params": { 00:09:04.573 "impl_name": "posix" 00:09:04.573 } 00:09:04.573 }, 00:09:04.573 { 00:09:04.573 "method": "sock_impl_set_options", 00:09:04.573 "params": { 00:09:04.573 "impl_name": "ssl", 00:09:04.573 "recv_buf_size": 4096, 00:09:04.573 "send_buf_size": 4096, 00:09:04.573 "enable_recv_pipe": true, 00:09:04.573 "enable_quickack": false, 00:09:04.573 
"enable_placement_id": 0, 00:09:04.573 "enable_zerocopy_send_server": true, 00:09:04.573 "enable_zerocopy_send_client": false, 00:09:04.573 "zerocopy_threshold": 0, 00:09:04.573 "tls_version": 0, 00:09:04.573 "enable_ktls": false 00:09:04.573 } 00:09:04.573 }, 00:09:04.573 { 00:09:04.573 "method": "sock_impl_set_options", 00:09:04.573 "params": { 00:09:04.573 "impl_name": "posix", 00:09:04.573 "recv_buf_size": 2097152, 00:09:04.573 "send_buf_size": 2097152, 00:09:04.573 "enable_recv_pipe": true, 00:09:04.573 "enable_quickack": false, 00:09:04.573 "enable_placement_id": 0, 00:09:04.573 "enable_zerocopy_send_server": true, 00:09:04.573 "enable_zerocopy_send_client": false, 00:09:04.573 "zerocopy_threshold": 0, 00:09:04.573 "tls_version": 0, 00:09:04.573 "enable_ktls": false 00:09:04.573 } 00:09:04.573 } 00:09:04.573 ] 00:09:04.573 }, 00:09:04.573 { 00:09:04.573 "subsystem": "vmd", 00:09:04.573 "config": [] 00:09:04.573 }, 00:09:04.573 { 00:09:04.573 "subsystem": "accel", 00:09:04.573 "config": [ 00:09:04.573 { 00:09:04.573 "method": "accel_set_options", 00:09:04.573 "params": { 00:09:04.573 "small_cache_size": 128, 00:09:04.573 "large_cache_size": 16, 00:09:04.573 "task_count": 2048, 00:09:04.573 "sequence_count": 2048, 00:09:04.573 "buf_count": 2048 00:09:04.573 } 00:09:04.573 } 00:09:04.573 ] 00:09:04.573 }, 00:09:04.573 { 00:09:04.573 "subsystem": "bdev", 00:09:04.573 "config": [ 00:09:04.573 { 00:09:04.573 "method": "bdev_set_options", 00:09:04.573 "params": { 00:09:04.573 "bdev_io_pool_size": 65535, 00:09:04.573 "bdev_io_cache_size": 256, 00:09:04.573 "bdev_auto_examine": true, 00:09:04.573 "iobuf_small_cache_size": 128, 00:09:04.573 "iobuf_large_cache_size": 16 00:09:04.573 } 00:09:04.573 }, 00:09:04.573 { 00:09:04.573 "method": "bdev_raid_set_options", 00:09:04.573 "params": { 00:09:04.573 "process_window_size_kb": 1024, 00:09:04.573 "process_max_bandwidth_mb_sec": 0 00:09:04.573 } 00:09:04.573 }, 00:09:04.573 { 00:09:04.573 "method": "bdev_iscsi_set_options", 
00:09:04.573 "params": { 00:09:04.573 "timeout_sec": 30 00:09:04.573 } 00:09:04.573 }, 00:09:04.573 { 00:09:04.573 "method": "bdev_nvme_set_options", 00:09:04.573 "params": { 00:09:04.573 "action_on_timeout": "none", 00:09:04.573 "timeout_us": 0, 00:09:04.573 "timeout_admin_us": 0, 00:09:04.573 "keep_alive_timeout_ms": 10000, 00:09:04.573 "arbitration_burst": 0, 00:09:04.573 "low_priority_weight": 0, 00:09:04.573 "medium_priority_weight": 0, 00:09:04.573 "high_priority_weight": 0, 00:09:04.573 "nvme_adminq_poll_period_us": 10000, 00:09:04.573 "nvme_ioq_poll_period_us": 0, 00:09:04.573 "io_queue_requests": 0, 00:09:04.573 "delay_cmd_submit": true, 00:09:04.573 "transport_retry_count": 4, 00:09:04.573 "bdev_retry_count": 3, 00:09:04.573 "transport_ack_timeout": 0, 00:09:04.573 "ctrlr_loss_timeout_sec": 0, 00:09:04.573 "reconnect_delay_sec": 0, 00:09:04.573 "fast_io_fail_timeout_sec": 0, 00:09:04.573 "disable_auto_failback": false, 00:09:04.573 "generate_uuids": false, 00:09:04.573 "transport_tos": 0, 00:09:04.573 "nvme_error_stat": false, 00:09:04.573 "rdma_srq_size": 0, 00:09:04.573 "io_path_stat": false, 00:09:04.573 "allow_accel_sequence": false, 00:09:04.573 "rdma_max_cq_size": 0, 00:09:04.573 "rdma_cm_event_timeout_ms": 0, 00:09:04.573 "dhchap_digests": [ 00:09:04.573 "sha256", 00:09:04.573 "sha384", 00:09:04.573 "sha512" 00:09:04.573 ], 00:09:04.573 "dhchap_dhgroups": [ 00:09:04.573 "null", 00:09:04.573 "ffdhe2048", 00:09:04.573 "ffdhe3072", 00:09:04.573 "ffdhe4096", 00:09:04.573 "ffdhe6144", 00:09:04.573 "ffdhe8192" 00:09:04.573 ] 00:09:04.573 } 00:09:04.573 }, 00:09:04.573 { 00:09:04.573 "method": "bdev_nvme_set_hotplug", 00:09:04.573 "params": { 00:09:04.573 "period_us": 100000, 00:09:04.573 "enable": false 00:09:04.573 } 00:09:04.573 }, 00:09:04.573 { 00:09:04.573 "method": "bdev_wait_for_examine" 00:09:04.573 } 00:09:04.573 ] 00:09:04.573 }, 00:09:04.573 { 00:09:04.573 "subsystem": "scsi", 00:09:04.573 "config": null 00:09:04.573 }, 00:09:04.573 { 
00:09:04.573 "subsystem": "scheduler", 00:09:04.573 "config": [ 00:09:04.573 { 00:09:04.573 "method": "framework_set_scheduler", 00:09:04.573 "params": { 00:09:04.573 "name": "static" 00:09:04.573 } 00:09:04.573 } 00:09:04.573 ] 00:09:04.573 }, 00:09:04.573 { 00:09:04.573 "subsystem": "vhost_scsi", 00:09:04.573 "config": [] 00:09:04.573 }, 00:09:04.573 { 00:09:04.573 "subsystem": "vhost_blk", 00:09:04.574 "config": [] 00:09:04.574 }, 00:09:04.574 { 00:09:04.574 "subsystem": "ublk", 00:09:04.574 "config": [] 00:09:04.574 }, 00:09:04.574 { 00:09:04.574 "subsystem": "nbd", 00:09:04.574 "config": [] 00:09:04.574 }, 00:09:04.574 { 00:09:04.574 "subsystem": "nvmf", 00:09:04.574 "config": [ 00:09:04.574 { 00:09:04.574 "method": "nvmf_set_config", 00:09:04.574 "params": { 00:09:04.574 "discovery_filter": "match_any", 00:09:04.574 "admin_cmd_passthru": { 00:09:04.574 "identify_ctrlr": false 00:09:04.574 }, 00:09:04.574 "dhchap_digests": [ 00:09:04.574 "sha256", 00:09:04.574 "sha384", 00:09:04.574 "sha512" 00:09:04.574 ], 00:09:04.574 "dhchap_dhgroups": [ 00:09:04.574 "null", 00:09:04.574 "ffdhe2048", 00:09:04.574 "ffdhe3072", 00:09:04.574 "ffdhe4096", 00:09:04.574 "ffdhe6144", 00:09:04.574 "ffdhe8192" 00:09:04.574 ] 00:09:04.574 } 00:09:04.574 }, 00:09:04.574 { 00:09:04.574 "method": "nvmf_set_max_subsystems", 00:09:04.574 "params": { 00:09:04.574 "max_subsystems": 1024 00:09:04.574 } 00:09:04.574 }, 00:09:04.574 { 00:09:04.574 "method": "nvmf_set_crdt", 00:09:04.574 "params": { 00:09:04.574 "crdt1": 0, 00:09:04.574 "crdt2": 0, 00:09:04.574 "crdt3": 0 00:09:04.574 } 00:09:04.574 }, 00:09:04.574 { 00:09:04.574 "method": "nvmf_create_transport", 00:09:04.574 "params": { 00:09:04.574 "trtype": "TCP", 00:09:04.574 "max_queue_depth": 128, 00:09:04.574 "max_io_qpairs_per_ctrlr": 127, 00:09:04.574 "in_capsule_data_size": 4096, 00:09:04.574 "max_io_size": 131072, 00:09:04.574 "io_unit_size": 131072, 00:09:04.574 "max_aq_depth": 128, 00:09:04.574 "num_shared_buffers": 511, 
00:09:04.574 "buf_cache_size": 4294967295, 00:09:04.574 "dif_insert_or_strip": false, 00:09:04.574 "zcopy": false, 00:09:04.574 "c2h_success": true, 00:09:04.574 "sock_priority": 0, 00:09:04.574 "abort_timeout_sec": 1, 00:09:04.574 "ack_timeout": 0, 00:09:04.574 "data_wr_pool_size": 0 00:09:04.574 } 00:09:04.574 } 00:09:04.574 ] 00:09:04.574 }, 00:09:04.574 { 00:09:04.574 "subsystem": "iscsi", 00:09:04.574 "config": [ 00:09:04.574 { 00:09:04.574 "method": "iscsi_set_options", 00:09:04.574 "params": { 00:09:04.574 "node_base": "iqn.2016-06.io.spdk", 00:09:04.574 "max_sessions": 128, 00:09:04.574 "max_connections_per_session": 2, 00:09:04.574 "max_queue_depth": 64, 00:09:04.574 "default_time2wait": 2, 00:09:04.574 "default_time2retain": 20, 00:09:04.574 "first_burst_length": 8192, 00:09:04.574 "immediate_data": true, 00:09:04.574 "allow_duplicated_isid": false, 00:09:04.574 "error_recovery_level": 0, 00:09:04.574 "nop_timeout": 60, 00:09:04.574 "nop_in_interval": 30, 00:09:04.574 "disable_chap": false, 00:09:04.574 "require_chap": false, 00:09:04.574 "mutual_chap": false, 00:09:04.574 "chap_group": 0, 00:09:04.574 "max_large_datain_per_connection": 64, 00:09:04.574 "max_r2t_per_connection": 4, 00:09:04.574 "pdu_pool_size": 36864, 00:09:04.574 "immediate_data_pool_size": 16384, 00:09:04.574 "data_out_pool_size": 2048 00:09:04.574 } 00:09:04.574 } 00:09:04.574 ] 00:09:04.574 } 00:09:04.574 ] 00:09:04.574 } 00:09:04.574 06:35:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:04.574 06:35:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57319 00:09:04.574 06:35:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57319 ']' 00:09:04.574 06:35:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57319 00:09:04.574 06:35:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:04.574 06:35:23 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.574 06:35:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57319 00:09:04.574 06:35:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.574 06:35:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.574 killing process with pid 57319 00:09:04.574 06:35:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57319' 00:09:04.574 06:35:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57319 00:09:04.574 06:35:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57319 00:09:07.108 06:35:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57375 00:09:07.108 06:35:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:07.108 06:35:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:12.378 06:35:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57375 00:09:12.378 06:35:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57375 ']' 00:09:12.378 06:35:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57375 00:09:12.378 06:35:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:12.378 06:35:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.378 06:35:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57375 00:09:12.378 killing process with pid 57375 00:09:12.378 06:35:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.378 06:35:30 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.378 06:35:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57375' 00:09:12.378 06:35:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57375 00:09:12.378 06:35:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57375 00:09:14.280 06:35:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:14.280 06:35:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:14.280 00:09:14.280 real 0m11.386s 00:09:14.280 user 0m10.781s 00:09:14.280 sys 0m1.092s 00:09:14.280 06:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.280 06:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:14.280 ************************************ 00:09:14.280 END TEST skip_rpc_with_json 00:09:14.280 ************************************ 00:09:14.280 06:35:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:14.280 06:35:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.280 06:35:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.280 06:35:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.280 ************************************ 00:09:14.280 START TEST skip_rpc_with_delay 00:09:14.280 ************************************ 00:09:14.280 06:35:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:09:14.280 06:35:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:14.280 06:35:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:09:14.538 
06:35:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:14.538 06:35:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:14.538 06:35:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.538 06:35:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:14.538 06:35:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.538 06:35:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:14.538 06:35:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.538 06:35:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:14.538 06:35:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:14.538 06:35:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:14.538 [2024-12-06 06:35:33.066062] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:09:14.538 06:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:09:14.538 06:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:14.538 06:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:14.538 06:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:14.538 00:09:14.538 real 0m0.210s 00:09:14.538 user 0m0.123s 00:09:14.538 sys 0m0.085s 00:09:14.538 06:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.538 06:35:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:14.538 ************************************ 00:09:14.538 END TEST skip_rpc_with_delay 00:09:14.538 ************************************ 00:09:14.538 06:35:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:14.538 06:35:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:14.538 06:35:33 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:14.538 06:35:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.538 06:35:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.538 06:35:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.798 ************************************ 00:09:14.798 START TEST exit_on_failed_rpc_init 00:09:14.798 ************************************ 00:09:14.798 06:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:09:14.798 06:35:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57503 00:09:14.798 06:35:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:14.798 06:35:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57503 00:09:14.798 06:35:33 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57503 ']' 00:09:14.798 06:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.798 06:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.798 06:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.798 06:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.798 06:35:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:14.798 [2024-12-06 06:35:33.325031] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:09:14.798 [2024-12-06 06:35:33.325223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57503 ] 00:09:15.057 [2024-12-06 06:35:33.516807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.057 [2024-12-06 06:35:33.680941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.023 06:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.023 06:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:09:16.023 06:35:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:16.023 06:35:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:16.023 06:35:34 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:09:16.023 06:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:16.023 06:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:16.023 06:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.023 06:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:16.023 06:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.023 06:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:16.023 06:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.023 06:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:16.023 06:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:16.023 06:35:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:16.301 [2024-12-06 06:35:34.734630] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:09:16.301 [2024-12-06 06:35:34.735389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57526 ] 00:09:16.301 [2024-12-06 06:35:34.925180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.561 [2024-12-06 06:35:35.087736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.561 [2024-12-06 06:35:35.087866] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:09:16.561 [2024-12-06 06:35:35.087887] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:16.561 [2024-12-06 06:35:35.087905] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:16.820 06:35:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:09:16.820 06:35:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:16.820 06:35:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:09:16.820 06:35:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:09:16.820 06:35:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:09:16.820 06:35:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:16.820 06:35:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:16.820 06:35:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57503 00:09:16.820 06:35:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57503 ']' 00:09:16.820 06:35:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57503 00:09:16.820 06:35:35 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:09:16.820 06:35:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.820 06:35:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57503 00:09:16.820 06:35:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.820 06:35:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.820 killing process with pid 57503 00:09:16.820 06:35:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57503' 00:09:16.820 06:35:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57503 00:09:16.820 06:35:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57503 00:09:19.353 00:09:19.353 real 0m4.528s 00:09:19.353 user 0m5.017s 00:09:19.353 sys 0m0.720s 00:09:19.353 06:35:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.353 06:35:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:19.353 ************************************ 00:09:19.354 END TEST exit_on_failed_rpc_init 00:09:19.354 ************************************ 00:09:19.354 06:35:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:19.354 ************************************ 00:09:19.354 END TEST skip_rpc 00:09:19.354 ************************************ 00:09:19.354 00:09:19.354 real 0m24.020s 00:09:19.354 user 0m22.964s 00:09:19.354 sys 0m2.622s 00:09:19.354 06:35:37 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.354 06:35:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.354 06:35:37 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:19.354 06:35:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.354 06:35:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.354 06:35:37 -- common/autotest_common.sh@10 -- # set +x 00:09:19.354 ************************************ 00:09:19.354 START TEST rpc_client 00:09:19.354 ************************************ 00:09:19.354 06:35:37 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:19.354 * Looking for test storage... 00:09:19.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:19.354 06:35:37 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:19.354 06:35:37 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:19.354 06:35:37 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:09:19.613 06:35:38 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@345 
-- # : 1 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.613 06:35:38 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:19.613 06:35:38 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.613 06:35:38 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:19.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.613 --rc genhtml_branch_coverage=1 00:09:19.613 --rc genhtml_function_coverage=1 00:09:19.613 --rc genhtml_legend=1 00:09:19.613 --rc geninfo_all_blocks=1 00:09:19.613 --rc geninfo_unexecuted_blocks=1 00:09:19.613 00:09:19.613 ' 00:09:19.613 06:35:38 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:19.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.613 --rc genhtml_branch_coverage=1 00:09:19.613 --rc genhtml_function_coverage=1 00:09:19.613 --rc 
genhtml_legend=1 00:09:19.613 --rc geninfo_all_blocks=1 00:09:19.613 --rc geninfo_unexecuted_blocks=1 00:09:19.613 00:09:19.613 ' 00:09:19.613 06:35:38 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:19.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.613 --rc genhtml_branch_coverage=1 00:09:19.613 --rc genhtml_function_coverage=1 00:09:19.613 --rc genhtml_legend=1 00:09:19.613 --rc geninfo_all_blocks=1 00:09:19.613 --rc geninfo_unexecuted_blocks=1 00:09:19.613 00:09:19.613 ' 00:09:19.613 06:35:38 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:19.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.613 --rc genhtml_branch_coverage=1 00:09:19.613 --rc genhtml_function_coverage=1 00:09:19.613 --rc genhtml_legend=1 00:09:19.613 --rc geninfo_all_blocks=1 00:09:19.613 --rc geninfo_unexecuted_blocks=1 00:09:19.613 00:09:19.613 ' 00:09:19.613 06:35:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:19.613 OK 00:09:19.613 06:35:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:19.613 00:09:19.613 real 0m0.288s 00:09:19.613 user 0m0.174s 00:09:19.613 sys 0m0.123s 00:09:19.613 06:35:38 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.613 06:35:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:19.613 ************************************ 00:09:19.613 END TEST rpc_client 00:09:19.613 ************************************ 00:09:19.613 06:35:38 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:19.613 06:35:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.613 06:35:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.613 06:35:38 -- common/autotest_common.sh@10 -- # set +x 00:09:19.613 ************************************ 00:09:19.613 START TEST json_config 
00:09:19.613 ************************************ 00:09:19.613 06:35:38 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:19.613 06:35:38 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:19.613 06:35:38 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:09:19.613 06:35:38 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:19.873 06:35:38 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:19.873 06:35:38 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.873 06:35:38 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.873 06:35:38 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.873 06:35:38 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.873 06:35:38 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.873 06:35:38 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.873 06:35:38 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.873 06:35:38 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.873 06:35:38 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.873 06:35:38 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.873 06:35:38 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.873 06:35:38 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:19.873 06:35:38 json_config -- scripts/common.sh@345 -- # : 1 00:09:19.873 06:35:38 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.873 06:35:38 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.873 06:35:38 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:19.873 06:35:38 json_config -- scripts/common.sh@353 -- # local d=1 00:09:19.873 06:35:38 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.873 06:35:38 json_config -- scripts/common.sh@355 -- # echo 1 00:09:19.873 06:35:38 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.873 06:35:38 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:19.873 06:35:38 json_config -- scripts/common.sh@353 -- # local d=2 00:09:19.873 06:35:38 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.873 06:35:38 json_config -- scripts/common.sh@355 -- # echo 2 00:09:19.873 06:35:38 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.873 06:35:38 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.873 06:35:38 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.873 06:35:38 json_config -- scripts/common.sh@368 -- # return 0 00:09:19.873 06:35:38 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.873 06:35:38 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:19.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.873 --rc genhtml_branch_coverage=1 00:09:19.873 --rc genhtml_function_coverage=1 00:09:19.873 --rc genhtml_legend=1 00:09:19.873 --rc geninfo_all_blocks=1 00:09:19.873 --rc geninfo_unexecuted_blocks=1 00:09:19.873 00:09:19.873 ' 00:09:19.873 06:35:38 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:19.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.873 --rc genhtml_branch_coverage=1 00:09:19.873 --rc genhtml_function_coverage=1 00:09:19.873 --rc genhtml_legend=1 00:09:19.873 --rc geninfo_all_blocks=1 00:09:19.873 --rc geninfo_unexecuted_blocks=1 00:09:19.873 00:09:19.873 ' 00:09:19.873 06:35:38 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:19.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.873 --rc genhtml_branch_coverage=1 00:09:19.873 --rc genhtml_function_coverage=1 00:09:19.873 --rc genhtml_legend=1 00:09:19.873 --rc geninfo_all_blocks=1 00:09:19.873 --rc geninfo_unexecuted_blocks=1 00:09:19.873 00:09:19.873 ' 00:09:19.873 06:35:38 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:19.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.873 --rc genhtml_branch_coverage=1 00:09:19.873 --rc genhtml_function_coverage=1 00:09:19.873 --rc genhtml_legend=1 00:09:19.873 --rc geninfo_all_blocks=1 00:09:19.873 --rc geninfo_unexecuted_blocks=1 00:09:19.873 00:09:19.873 ' 00:09:19.873 06:35:38 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e64019f6-f285-443c-9a8b-a61da1f9d2a5 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=e64019f6-f285-443c-9a8b-a61da1f9d2a5 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.873 06:35:38 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.873 06:35:38 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.873 06:35:38 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.873 06:35:38 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.873 06:35:38 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.873 06:35:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.874 06:35:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.874 06:35:38 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.874 06:35:38 json_config -- paths/export.sh@5 -- # export PATH 00:09:19.874 06:35:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.874 06:35:38 json_config -- nvmf/common.sh@51 -- # : 0 00:09:19.874 06:35:38 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.874 06:35:38 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.874 06:35:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.874 06:35:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.874 06:35:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.874 06:35:38 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.874 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.874 06:35:38 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.874 06:35:38 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.874 06:35:38 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.874 06:35:38 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
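The `[: : integer expression expected` message above comes from `[ '' -eq 1 ]` on line 33 of `test/nvmf/common.sh`: an unset flag reaches the numeric test as an empty string. A minimal sketch of the usual guard, defaulting the value before comparing; `is_enabled` is an illustrative helper name, not part of the SPDK scripts:

```shell
# Guard a numeric test against empty/unset input by defaulting to 0.
# is_enabled is a hypothetical helper, not an SPDK function.
is_enabled() {
  local flag=${1:-0}        # empty or missing argument collapses to 0
  [ "$flag" -eq 1 ]         # now always a valid integer comparison
}

is_enabled 1 && echo "enabled"
is_enabled "" || echo "disabled (no integer-expression error)"
```

With the default in place, an empty flag takes the disabled branch silently instead of emitting the `integer expression expected` diagnostic seen in this trace.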
00:09:19.874 06:35:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:19.874 06:35:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:19.874 06:35:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:19.874 06:35:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:19.874 WARNING: No tests are enabled so not running JSON configuration tests 00:09:19.874 06:35:38 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:09:19.874 06:35:38 json_config -- json_config/json_config.sh@28 -- # exit 0 00:09:19.874 00:09:19.874 real 0m0.181s 00:09:19.874 user 0m0.126s 00:09:19.874 sys 0m0.063s 00:09:19.874 06:35:38 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.874 06:35:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:19.874 ************************************ 00:09:19.874 END TEST json_config 00:09:19.874 ************************************ 00:09:19.874 06:35:38 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:19.874 06:35:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.874 06:35:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.874 06:35:38 -- common/autotest_common.sh@10 -- # set +x 00:09:19.874 ************************************ 00:09:19.874 START TEST json_config_extra_key 00:09:19.874 ************************************ 00:09:19.874 06:35:38 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:19.874 06:35:38 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:19.874 06:35:38 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:09:19.874 06:35:38 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:20.133 06:35:38 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:20.133 06:35:38 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.133 06:35:38 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:20.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.133 --rc genhtml_branch_coverage=1 00:09:20.133 --rc genhtml_function_coverage=1 00:09:20.133 --rc genhtml_legend=1 00:09:20.133 --rc geninfo_all_blocks=1 00:09:20.133 --rc geninfo_unexecuted_blocks=1 00:09:20.133 00:09:20.133 ' 00:09:20.133 06:35:38 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:20.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.133 --rc genhtml_branch_coverage=1 00:09:20.133 --rc genhtml_function_coverage=1 00:09:20.133 --rc 
genhtml_legend=1 00:09:20.133 --rc geninfo_all_blocks=1 00:09:20.133 --rc geninfo_unexecuted_blocks=1 00:09:20.133 00:09:20.133 ' 00:09:20.133 06:35:38 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:20.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.133 --rc genhtml_branch_coverage=1 00:09:20.133 --rc genhtml_function_coverage=1 00:09:20.133 --rc genhtml_legend=1 00:09:20.133 --rc geninfo_all_blocks=1 00:09:20.133 --rc geninfo_unexecuted_blocks=1 00:09:20.133 00:09:20.133 ' 00:09:20.133 06:35:38 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:20.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.133 --rc genhtml_branch_coverage=1 00:09:20.133 --rc genhtml_function_coverage=1 00:09:20.133 --rc genhtml_legend=1 00:09:20.133 --rc geninfo_all_blocks=1 00:09:20.133 --rc geninfo_unexecuted_blocks=1 00:09:20.133 00:09:20.133 ' 00:09:20.133 06:35:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e64019f6-f285-443c-9a8b-a61da1f9d2a5 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e64019f6-f285-443c-9a8b-a61da1f9d2a5 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.133 06:35:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.133 06:35:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.133 06:35:38 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.133 06:35:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.133 06:35:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:20.133 06:35:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.133 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.133 06:35:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.133 06:35:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:20.133 06:35:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:20.133 06:35:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:20.133 06:35:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:20.133 06:35:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:20.133 06:35:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:20.133 06:35:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:20.133 06:35:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:20.133 06:35:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:20.133 06:35:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:20.133 INFO: launching applications... 00:09:20.133 06:35:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
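The repeated `IFS=.-:` / `read -ra ver1` / `(( ver1[v] < ver2[v] ))` lines in the trace are the harness's dotted-version check (here deciding whether the installed `lcov` 1.15 predates 2). A condensed sketch of that comparison, under the assumption that all version parts are numeric; `version_lt` is an illustrative name, not the real `scripts/common.sh` function:

```shell
# Compare two dotted versions numerically, part by part, splitting on
# ".", "-" and ":" as the xtrace output shows. Missing parts count as 0.
# Assumes purely numeric components (no "rc1"-style suffixes).
version_lt() {
  local IFS=.-:
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) i x y
  for ((i = 0; i < n; i++)); do
    x=${a[i]:-0}; y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal versions are not strictly less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Comparing element-wise rather than lexically is what makes 2.9 sort before 2.10, which a plain string comparison would get wrong.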
00:09:20.133 06:35:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:20.133 06:35:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:20.134 06:35:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:20.134 06:35:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:20.134 06:35:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:20.134 06:35:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:20.134 06:35:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:20.134 06:35:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:20.134 06:35:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57731 00:09:20.134 06:35:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:20.134 Waiting for target to run... 00:09:20.134 06:35:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57731 /var/tmp/spdk_tgt.sock 00:09:20.134 06:35:38 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:20.134 06:35:38 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57731 ']' 00:09:20.134 06:35:38 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:20.134 06:35:38 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.134 06:35:38 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
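The `waitforlisten 57731 /var/tmp/spdk_tgt.sock` step above blocks until the freshly launched target is reachable on its Unix-domain RPC socket. A simplified stand-in for that wait loop: poll for the socket while confirming the pid is still alive. Retry count and interval here are illustrative, and the real helper in `autotest_common.sh` is more thorough before declaring success:

```shell
# Poll until a Unix-domain socket appears, bailing out early if the
# process dies first. Simplified sketch; not the actual waitforlisten.
wait_for_socket() {
  local pid=$1 sock=$2 retries=${3:-100} i
  for ((i = 0; i < retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # target exited during startup
    [ -S "$sock" ] && return 0               # socket exists: target is listening
    sleep 0.1
  done
  return 1                                   # gave up after retries * 0.1 s
}
```

Note that `-S` only proves the socket file exists; a stricter readiness check would attempt an actual connection or RPC round-trip, as the harness does.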
00:09:20.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:20.134 06:35:38 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.134 06:35:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:20.134 [2024-12-06 06:35:38.715050] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:09:20.134 [2024-12-06 06:35:38.715247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57731 ] 00:09:20.704 [2024-12-06 06:35:39.206767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.704 [2024-12-06 06:35:39.326724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.655 06:35:40 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.655 00:09:21.655 06:35:40 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:09:21.655 06:35:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:21.655 INFO: shutting down applications... 00:09:21.655 06:35:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
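The `kill -SIGINT 57731` followed by repeated `kill -0 57731` / `sleep 0.5` lines that come next are the harness's graceful-shutdown loop: signal the target, then give it up to 30 half-second checks to exit on its own. The same pattern as a standalone sketch; `shutdown_app` is an illustrative name, not the literal `json_config/common.sh` code:

```shell
# Send SIGINT, then poll with kill -0 until the process disappears or
# the grace period runs out. Mirrors the loop visible in the trace.
shutdown_app() {
  local pid=$1 retries=${2:-30} i
  kill -SIGINT "$pid" 2>/dev/null
  for ((i = 0; i < retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 0   # process gone: clean shutdown
    sleep 0.5
  done
  return 1                                   # still running after the grace period
}
```

In the trace this loop exits via `break` once `kill -0` stops succeeding, after which the harness prints `SPDK target shutdown done`.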
00:09:21.655 06:35:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:21.655 06:35:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:21.655 06:35:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:21.655 06:35:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57731 ]] 00:09:21.655 06:35:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57731 00:09:21.655 06:35:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:21.655 06:35:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:21.655 06:35:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57731 00:09:21.655 06:35:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:21.913 06:35:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:21.913 06:35:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:21.913 06:35:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57731 00:09:21.913 06:35:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:22.478 06:35:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:22.478 06:35:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:22.478 06:35:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57731 00:09:22.478 06:35:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:23.044 06:35:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:23.044 06:35:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:23.044 06:35:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57731 00:09:23.044 06:35:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:23.611 06:35:42 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:09:23.611 06:35:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:23.611 06:35:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57731 00:09:23.611 06:35:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:24.177 06:35:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:24.177 06:35:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:24.177 06:35:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57731 00:09:24.177 06:35:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:24.439 06:35:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:24.439 06:35:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:24.439 06:35:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57731 00:09:24.439 06:35:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:24.439 06:35:43 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:24.439 SPDK target shutdown done 00:09:24.439 Success 00:09:24.439 06:35:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:24.439 06:35:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:24.439 06:35:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:24.439 00:09:24.439 real 0m4.672s 00:09:24.439 user 0m4.108s 00:09:24.439 sys 0m0.696s 00:09:24.439 ************************************ 00:09:24.439 END TEST json_config_extra_key 00:09:24.439 ************************************ 00:09:24.439 06:35:43 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.439 06:35:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:24.699 06:35:43 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:24.699 06:35:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.699 06:35:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.699 06:35:43 -- common/autotest_common.sh@10 -- # set +x 00:09:24.699 ************************************ 00:09:24.699 START TEST alias_rpc 00:09:24.699 ************************************ 00:09:24.699 06:35:43 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:24.699 * Looking for test storage... 00:09:24.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:24.699 06:35:43 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:24.699 06:35:43 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:24.699 06:35:43 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:24.699 06:35:43 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:24.699 06:35:43 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.699 06:35:43 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:24.699 06:35:43 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.699 06:35:43 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:24.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.699 --rc genhtml_branch_coverage=1 00:09:24.699 --rc genhtml_function_coverage=1 00:09:24.699 --rc genhtml_legend=1 00:09:24.699 --rc geninfo_all_blocks=1 00:09:24.699 --rc geninfo_unexecuted_blocks=1 00:09:24.699 00:09:24.699 ' 00:09:24.699 06:35:43 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:24.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.699 --rc genhtml_branch_coverage=1 00:09:24.699 --rc genhtml_function_coverage=1 00:09:24.699 --rc 
genhtml_legend=1 00:09:24.699 --rc geninfo_all_blocks=1 00:09:24.699 --rc geninfo_unexecuted_blocks=1 00:09:24.699 00:09:24.699 ' 00:09:24.699 06:35:43 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:24.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.699 --rc genhtml_branch_coverage=1 00:09:24.699 --rc genhtml_function_coverage=1 00:09:24.699 --rc genhtml_legend=1 00:09:24.699 --rc geninfo_all_blocks=1 00:09:24.699 --rc geninfo_unexecuted_blocks=1 00:09:24.699 00:09:24.699 ' 00:09:24.699 06:35:43 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:24.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.699 --rc genhtml_branch_coverage=1 00:09:24.699 --rc genhtml_function_coverage=1 00:09:24.699 --rc genhtml_legend=1 00:09:24.699 --rc geninfo_all_blocks=1 00:09:24.699 --rc geninfo_unexecuted_blocks=1 00:09:24.699 00:09:24.699 ' 00:09:24.699 06:35:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:24.699 06:35:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57848 00:09:24.699 06:35:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:24.699 06:35:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57848 00:09:24.699 06:35:43 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57848 ']' 00:09:24.699 06:35:43 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.699 06:35:43 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.699 06:35:43 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:24.699 06:35:43 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.699 06:35:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.985 [2024-12-06 06:35:43.440066] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:09:24.985 [2024-12-06 06:35:43.440245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57848 ] 00:09:25.244 [2024-12-06 06:35:43.634019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.244 [2024-12-06 06:35:43.797932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.179 06:35:44 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.179 06:35:44 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:26.179 06:35:44 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:26.438 06:35:44 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57848 00:09:26.438 06:35:44 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57848 ']' 00:09:26.438 06:35:44 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57848 00:09:26.438 06:35:44 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:09:26.438 06:35:44 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.438 06:35:44 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57848 00:09:26.438 killing process with pid 57848 00:09:26.438 06:35:45 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.438 06:35:45 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.438 06:35:45 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57848' 00:09:26.438 06:35:45 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57848 00:09:26.438 06:35:45 alias_rpc -- common/autotest_common.sh@978 -- # wait 57848 00:09:28.972 ************************************ 00:09:28.972 END TEST alias_rpc 00:09:28.972 ************************************ 00:09:28.972 00:09:28.972 real 0m4.134s 00:09:28.972 user 0m4.240s 00:09:28.972 sys 0m0.661s 00:09:28.972 06:35:47 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.972 06:35:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.972 06:35:47 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:28.972 06:35:47 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:28.972 06:35:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.972 06:35:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.972 06:35:47 -- common/autotest_common.sh@10 -- # set +x 00:09:28.972 ************************************ 00:09:28.972 START TEST spdkcli_tcp 00:09:28.972 ************************************ 00:09:28.972 06:35:47 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:28.972 * Looking for test storage... 
00:09:28.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:28.972 06:35:47 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:28.972 06:35:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:28.972 06:35:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:28.972 06:35:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.972 06:35:47 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:28.972 06:35:47 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.972 06:35:47 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:28.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.972 --rc genhtml_branch_coverage=1 00:09:28.972 --rc genhtml_function_coverage=1 00:09:28.972 --rc genhtml_legend=1 00:09:28.972 --rc geninfo_all_blocks=1 00:09:28.972 --rc geninfo_unexecuted_blocks=1 00:09:28.972 00:09:28.972 ' 00:09:28.972 06:35:47 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:28.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.972 --rc genhtml_branch_coverage=1 00:09:28.972 --rc genhtml_function_coverage=1 00:09:28.972 --rc genhtml_legend=1 00:09:28.972 --rc geninfo_all_blocks=1 00:09:28.972 --rc geninfo_unexecuted_blocks=1 00:09:28.972 00:09:28.972 ' 00:09:28.972 06:35:47 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:28.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.972 --rc genhtml_branch_coverage=1 00:09:28.972 --rc genhtml_function_coverage=1 00:09:28.972 --rc genhtml_legend=1 00:09:28.972 --rc geninfo_all_blocks=1 00:09:28.972 --rc geninfo_unexecuted_blocks=1 00:09:28.972 00:09:28.972 ' 00:09:28.972 06:35:47 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:28.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.972 --rc genhtml_branch_coverage=1 00:09:28.972 --rc genhtml_function_coverage=1 00:09:28.972 --rc genhtml_legend=1 00:09:28.972 --rc geninfo_all_blocks=1 00:09:28.972 --rc geninfo_unexecuted_blocks=1 00:09:28.972 00:09:28.972 ' 00:09:28.972 06:35:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:28.972 06:35:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:28.972 06:35:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:28.972 06:35:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:28.972 06:35:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:28.972 06:35:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:28.972 06:35:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:28.972 06:35:47 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:28.972 06:35:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.972 06:35:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57955 00:09:28.972 06:35:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:28.972 06:35:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57955 00:09:28.972 06:35:47 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57955 ']' 00:09:28.972 06:35:47 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.972 06:35:47 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.972 06:35:47 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.972 06:35:47 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.972 06:35:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.972 [2024-12-06 06:35:47.604997] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:09:28.972 [2024-12-06 06:35:47.605485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57955 ] 00:09:29.231 [2024-12-06 06:35:47.791232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.490 [2024-12-06 06:35:47.936095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.490 [2024-12-06 06:35:47.936109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.428 06:35:48 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.428 06:35:48 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:09:30.428 06:35:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57978 00:09:30.428 06:35:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:30.428 06:35:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:30.686 [ 00:09:30.686 "bdev_malloc_delete", 
00:09:30.686 "bdev_malloc_create", 00:09:30.686 "bdev_null_resize", 00:09:30.686 "bdev_null_delete", 00:09:30.686 "bdev_null_create", 00:09:30.686 "bdev_nvme_cuse_unregister", 00:09:30.686 "bdev_nvme_cuse_register", 00:09:30.686 "bdev_opal_new_user", 00:09:30.686 "bdev_opal_set_lock_state", 00:09:30.686 "bdev_opal_delete", 00:09:30.686 "bdev_opal_get_info", 00:09:30.686 "bdev_opal_create", 00:09:30.686 "bdev_nvme_opal_revert", 00:09:30.686 "bdev_nvme_opal_init", 00:09:30.686 "bdev_nvme_send_cmd", 00:09:30.686 "bdev_nvme_set_keys", 00:09:30.687 "bdev_nvme_get_path_iostat", 00:09:30.687 "bdev_nvme_get_mdns_discovery_info", 00:09:30.687 "bdev_nvme_stop_mdns_discovery", 00:09:30.687 "bdev_nvme_start_mdns_discovery", 00:09:30.687 "bdev_nvme_set_multipath_policy", 00:09:30.687 "bdev_nvme_set_preferred_path", 00:09:30.687 "bdev_nvme_get_io_paths", 00:09:30.687 "bdev_nvme_remove_error_injection", 00:09:30.687 "bdev_nvme_add_error_injection", 00:09:30.687 "bdev_nvme_get_discovery_info", 00:09:30.687 "bdev_nvme_stop_discovery", 00:09:30.687 "bdev_nvme_start_discovery", 00:09:30.687 "bdev_nvme_get_controller_health_info", 00:09:30.687 "bdev_nvme_disable_controller", 00:09:30.687 "bdev_nvme_enable_controller", 00:09:30.687 "bdev_nvme_reset_controller", 00:09:30.687 "bdev_nvme_get_transport_statistics", 00:09:30.687 "bdev_nvme_apply_firmware", 00:09:30.687 "bdev_nvme_detach_controller", 00:09:30.687 "bdev_nvme_get_controllers", 00:09:30.687 "bdev_nvme_attach_controller", 00:09:30.687 "bdev_nvme_set_hotplug", 00:09:30.687 "bdev_nvme_set_options", 00:09:30.687 "bdev_passthru_delete", 00:09:30.687 "bdev_passthru_create", 00:09:30.687 "bdev_lvol_set_parent_bdev", 00:09:30.687 "bdev_lvol_set_parent", 00:09:30.687 "bdev_lvol_check_shallow_copy", 00:09:30.687 "bdev_lvol_start_shallow_copy", 00:09:30.687 "bdev_lvol_grow_lvstore", 00:09:30.687 "bdev_lvol_get_lvols", 00:09:30.687 "bdev_lvol_get_lvstores", 00:09:30.687 "bdev_lvol_delete", 00:09:30.687 "bdev_lvol_set_read_only", 
00:09:30.687 "bdev_lvol_resize", 00:09:30.687 "bdev_lvol_decouple_parent", 00:09:30.687 "bdev_lvol_inflate", 00:09:30.687 "bdev_lvol_rename", 00:09:30.687 "bdev_lvol_clone_bdev", 00:09:30.687 "bdev_lvol_clone", 00:09:30.687 "bdev_lvol_snapshot", 00:09:30.687 "bdev_lvol_create", 00:09:30.687 "bdev_lvol_delete_lvstore", 00:09:30.687 "bdev_lvol_rename_lvstore", 00:09:30.687 "bdev_lvol_create_lvstore", 00:09:30.687 "bdev_raid_set_options", 00:09:30.687 "bdev_raid_remove_base_bdev", 00:09:30.687 "bdev_raid_add_base_bdev", 00:09:30.687 "bdev_raid_delete", 00:09:30.687 "bdev_raid_create", 00:09:30.687 "bdev_raid_get_bdevs", 00:09:30.687 "bdev_error_inject_error", 00:09:30.687 "bdev_error_delete", 00:09:30.687 "bdev_error_create", 00:09:30.687 "bdev_split_delete", 00:09:30.687 "bdev_split_create", 00:09:30.687 "bdev_delay_delete", 00:09:30.687 "bdev_delay_create", 00:09:30.687 "bdev_delay_update_latency", 00:09:30.687 "bdev_zone_block_delete", 00:09:30.687 "bdev_zone_block_create", 00:09:30.687 "blobfs_create", 00:09:30.687 "blobfs_detect", 00:09:30.687 "blobfs_set_cache_size", 00:09:30.687 "bdev_aio_delete", 00:09:30.687 "bdev_aio_rescan", 00:09:30.687 "bdev_aio_create", 00:09:30.687 "bdev_ftl_set_property", 00:09:30.687 "bdev_ftl_get_properties", 00:09:30.687 "bdev_ftl_get_stats", 00:09:30.687 "bdev_ftl_unmap", 00:09:30.687 "bdev_ftl_unload", 00:09:30.687 "bdev_ftl_delete", 00:09:30.687 "bdev_ftl_load", 00:09:30.687 "bdev_ftl_create", 00:09:30.687 "bdev_virtio_attach_controller", 00:09:30.687 "bdev_virtio_scsi_get_devices", 00:09:30.687 "bdev_virtio_detach_controller", 00:09:30.687 "bdev_virtio_blk_set_hotplug", 00:09:30.687 "bdev_iscsi_delete", 00:09:30.687 "bdev_iscsi_create", 00:09:30.687 "bdev_iscsi_set_options", 00:09:30.687 "accel_error_inject_error", 00:09:30.687 "ioat_scan_accel_module", 00:09:30.687 "dsa_scan_accel_module", 00:09:30.687 "iaa_scan_accel_module", 00:09:30.687 "keyring_file_remove_key", 00:09:30.687 "keyring_file_add_key", 00:09:30.687 
"keyring_linux_set_options", 00:09:30.687 "fsdev_aio_delete", 00:09:30.687 "fsdev_aio_create", 00:09:30.687 "iscsi_get_histogram", 00:09:30.687 "iscsi_enable_histogram", 00:09:30.687 "iscsi_set_options", 00:09:30.687 "iscsi_get_auth_groups", 00:09:30.687 "iscsi_auth_group_remove_secret", 00:09:30.687 "iscsi_auth_group_add_secret", 00:09:30.687 "iscsi_delete_auth_group", 00:09:30.687 "iscsi_create_auth_group", 00:09:30.687 "iscsi_set_discovery_auth", 00:09:30.687 "iscsi_get_options", 00:09:30.687 "iscsi_target_node_request_logout", 00:09:30.687 "iscsi_target_node_set_redirect", 00:09:30.687 "iscsi_target_node_set_auth", 00:09:30.687 "iscsi_target_node_add_lun", 00:09:30.687 "iscsi_get_stats", 00:09:30.687 "iscsi_get_connections", 00:09:30.687 "iscsi_portal_group_set_auth", 00:09:30.687 "iscsi_start_portal_group", 00:09:30.687 "iscsi_delete_portal_group", 00:09:30.687 "iscsi_create_portal_group", 00:09:30.687 "iscsi_get_portal_groups", 00:09:30.687 "iscsi_delete_target_node", 00:09:30.687 "iscsi_target_node_remove_pg_ig_maps", 00:09:30.687 "iscsi_target_node_add_pg_ig_maps", 00:09:30.687 "iscsi_create_target_node", 00:09:30.687 "iscsi_get_target_nodes", 00:09:30.687 "iscsi_delete_initiator_group", 00:09:30.687 "iscsi_initiator_group_remove_initiators", 00:09:30.687 "iscsi_initiator_group_add_initiators", 00:09:30.687 "iscsi_create_initiator_group", 00:09:30.687 "iscsi_get_initiator_groups", 00:09:30.687 "nvmf_set_crdt", 00:09:30.687 "nvmf_set_config", 00:09:30.687 "nvmf_set_max_subsystems", 00:09:30.687 "nvmf_stop_mdns_prr", 00:09:30.687 "nvmf_publish_mdns_prr", 00:09:30.687 "nvmf_subsystem_get_listeners", 00:09:30.687 "nvmf_subsystem_get_qpairs", 00:09:30.687 "nvmf_subsystem_get_controllers", 00:09:30.687 "nvmf_get_stats", 00:09:30.687 "nvmf_get_transports", 00:09:30.687 "nvmf_create_transport", 00:09:30.687 "nvmf_get_targets", 00:09:30.687 "nvmf_delete_target", 00:09:30.687 "nvmf_create_target", 00:09:30.687 "nvmf_subsystem_allow_any_host", 00:09:30.687 
"nvmf_subsystem_set_keys", 00:09:30.687 "nvmf_subsystem_remove_host", 00:09:30.687 "nvmf_subsystem_add_host", 00:09:30.687 "nvmf_ns_remove_host", 00:09:30.687 "nvmf_ns_add_host", 00:09:30.687 "nvmf_subsystem_remove_ns", 00:09:30.687 "nvmf_subsystem_set_ns_ana_group", 00:09:30.687 "nvmf_subsystem_add_ns", 00:09:30.687 "nvmf_subsystem_listener_set_ana_state", 00:09:30.687 "nvmf_discovery_get_referrals", 00:09:30.687 "nvmf_discovery_remove_referral", 00:09:30.687 "nvmf_discovery_add_referral", 00:09:30.687 "nvmf_subsystem_remove_listener", 00:09:30.687 "nvmf_subsystem_add_listener", 00:09:30.687 "nvmf_delete_subsystem", 00:09:30.687 "nvmf_create_subsystem", 00:09:30.687 "nvmf_get_subsystems", 00:09:30.687 "env_dpdk_get_mem_stats", 00:09:30.687 "nbd_get_disks", 00:09:30.687 "nbd_stop_disk", 00:09:30.687 "nbd_start_disk", 00:09:30.687 "ublk_recover_disk", 00:09:30.687 "ublk_get_disks", 00:09:30.687 "ublk_stop_disk", 00:09:30.687 "ublk_start_disk", 00:09:30.687 "ublk_destroy_target", 00:09:30.687 "ublk_create_target", 00:09:30.687 "virtio_blk_create_transport", 00:09:30.687 "virtio_blk_get_transports", 00:09:30.687 "vhost_controller_set_coalescing", 00:09:30.687 "vhost_get_controllers", 00:09:30.687 "vhost_delete_controller", 00:09:30.687 "vhost_create_blk_controller", 00:09:30.687 "vhost_scsi_controller_remove_target", 00:09:30.687 "vhost_scsi_controller_add_target", 00:09:30.687 "vhost_start_scsi_controller", 00:09:30.687 "vhost_create_scsi_controller", 00:09:30.687 "thread_set_cpumask", 00:09:30.687 "scheduler_set_options", 00:09:30.687 "framework_get_governor", 00:09:30.687 "framework_get_scheduler", 00:09:30.687 "framework_set_scheduler", 00:09:30.687 "framework_get_reactors", 00:09:30.687 "thread_get_io_channels", 00:09:30.687 "thread_get_pollers", 00:09:30.687 "thread_get_stats", 00:09:30.687 "framework_monitor_context_switch", 00:09:30.687 "spdk_kill_instance", 00:09:30.687 "log_enable_timestamps", 00:09:30.687 "log_get_flags", 00:09:30.687 "log_clear_flag", 
00:09:30.687 "log_set_flag", 00:09:30.687 "log_get_level", 00:09:30.687 "log_set_level", 00:09:30.687 "log_get_print_level", 00:09:30.687 "log_set_print_level", 00:09:30.687 "framework_enable_cpumask_locks", 00:09:30.687 "framework_disable_cpumask_locks", 00:09:30.687 "framework_wait_init", 00:09:30.687 "framework_start_init", 00:09:30.687 "scsi_get_devices", 00:09:30.687 "bdev_get_histogram", 00:09:30.687 "bdev_enable_histogram", 00:09:30.687 "bdev_set_qos_limit", 00:09:30.687 "bdev_set_qd_sampling_period", 00:09:30.687 "bdev_get_bdevs", 00:09:30.687 "bdev_reset_iostat", 00:09:30.687 "bdev_get_iostat", 00:09:30.687 "bdev_examine", 00:09:30.687 "bdev_wait_for_examine", 00:09:30.687 "bdev_set_options", 00:09:30.687 "accel_get_stats", 00:09:30.687 "accel_set_options", 00:09:30.687 "accel_set_driver", 00:09:30.687 "accel_crypto_key_destroy", 00:09:30.687 "accel_crypto_keys_get", 00:09:30.687 "accel_crypto_key_create", 00:09:30.687 "accel_assign_opc", 00:09:30.687 "accel_get_module_info", 00:09:30.687 "accel_get_opc_assignments", 00:09:30.687 "vmd_rescan", 00:09:30.687 "vmd_remove_device", 00:09:30.687 "vmd_enable", 00:09:30.687 "sock_get_default_impl", 00:09:30.687 "sock_set_default_impl", 00:09:30.687 "sock_impl_set_options", 00:09:30.687 "sock_impl_get_options", 00:09:30.687 "iobuf_get_stats", 00:09:30.687 "iobuf_set_options", 00:09:30.687 "keyring_get_keys", 00:09:30.687 "framework_get_pci_devices", 00:09:30.687 "framework_get_config", 00:09:30.687 "framework_get_subsystems", 00:09:30.687 "fsdev_set_opts", 00:09:30.687 "fsdev_get_opts", 00:09:30.687 "trace_get_info", 00:09:30.687 "trace_get_tpoint_group_mask", 00:09:30.687 "trace_disable_tpoint_group", 00:09:30.687 "trace_enable_tpoint_group", 00:09:30.687 "trace_clear_tpoint_mask", 00:09:30.687 "trace_set_tpoint_mask", 00:09:30.687 "notify_get_notifications", 00:09:30.688 "notify_get_types", 00:09:30.688 "spdk_get_version", 00:09:30.688 "rpc_get_methods" 00:09:30.688 ] 00:09:30.688 06:35:49 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:30.688 06:35:49 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:30.688 06:35:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:30.946 06:35:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:30.946 06:35:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57955 00:09:30.946 06:35:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57955 ']' 00:09:30.946 06:35:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57955 00:09:30.946 06:35:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:09:30.946 06:35:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.946 06:35:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57955 00:09:30.946 killing process with pid 57955 00:09:30.946 06:35:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.946 06:35:49 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.946 06:35:49 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57955' 00:09:30.946 06:35:49 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57955 00:09:30.946 06:35:49 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57955 00:09:33.479 ************************************ 00:09:33.479 END TEST spdkcli_tcp 00:09:33.479 ************************************ 00:09:33.479 00:09:33.479 real 0m4.561s 00:09:33.479 user 0m8.238s 00:09:33.479 sys 0m0.815s 00:09:33.479 06:35:51 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.479 06:35:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.479 06:35:51 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:33.479 06:35:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.479 06:35:51 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.479 06:35:51 -- common/autotest_common.sh@10 -- # set +x 00:09:33.479 ************************************ 00:09:33.479 START TEST dpdk_mem_utility 00:09:33.479 ************************************ 00:09:33.479 06:35:51 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:33.479 * Looking for test storage... 00:09:33.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:33.479 06:35:52 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:33.479 06:35:52 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:33.479 06:35:52 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:09:33.479 06:35:52 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:33.479 
06:35:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:33.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.479 06:35:52 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:33.479 06:35:52 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.479 06:35:52 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:33.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.479 --rc genhtml_branch_coverage=1 00:09:33.479 --rc genhtml_function_coverage=1 00:09:33.479 --rc genhtml_legend=1 00:09:33.479 --rc geninfo_all_blocks=1 00:09:33.479 --rc geninfo_unexecuted_blocks=1 00:09:33.479 00:09:33.479 ' 00:09:33.479 06:35:52 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:33.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.479 --rc genhtml_branch_coverage=1 00:09:33.479 --rc genhtml_function_coverage=1 00:09:33.479 --rc genhtml_legend=1 00:09:33.479 --rc geninfo_all_blocks=1 00:09:33.479 --rc geninfo_unexecuted_blocks=1 00:09:33.479 00:09:33.479 ' 00:09:33.479 06:35:52 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:33.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.479 --rc genhtml_branch_coverage=1 00:09:33.479 --rc genhtml_function_coverage=1 00:09:33.479 --rc genhtml_legend=1 00:09:33.479 --rc geninfo_all_blocks=1 00:09:33.479 --rc geninfo_unexecuted_blocks=1 00:09:33.479 00:09:33.479 ' 00:09:33.479 06:35:52 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:33.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.479 --rc genhtml_branch_coverage=1 00:09:33.479 --rc genhtml_function_coverage=1 00:09:33.479 --rc genhtml_legend=1 
00:09:33.479 --rc geninfo_all_blocks=1 00:09:33.479 --rc geninfo_unexecuted_blocks=1 00:09:33.479 00:09:33.479 ' 00:09:33.479 06:35:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:33.479 06:35:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58083 00:09:33.479 06:35:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58083 00:09:33.479 06:35:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:33.479 06:35:52 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58083 ']' 00:09:33.479 06:35:52 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.479 06:35:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.479 06:35:52 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.479 06:35:52 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.479 06:35:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:33.738 [2024-12-06 06:35:52.222059] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:09:33.738 [2024-12-06 06:35:52.223228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58083 ] 00:09:33.996 [2024-12-06 06:35:52.420929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.996 [2024-12-06 06:35:52.595667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.373 06:35:53 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.373 06:35:53 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:09:35.373 06:35:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:35.373 06:35:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:35.373 06:35:53 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.373 06:35:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:35.373 { 00:09:35.373 "filename": "/tmp/spdk_mem_dump.txt" 00:09:35.373 } 00:09:35.373 06:35:53 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.373 06:35:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:35.373 DPDK memory size 824.000000 MiB in 1 heap(s) 00:09:35.373 1 heaps totaling size 824.000000 MiB 00:09:35.373 size: 824.000000 MiB heap id: 0 00:09:35.373 end heaps---------- 00:09:35.373 9 mempools totaling size 603.782043 MiB 00:09:35.373 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:35.373 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:35.373 size: 100.555481 MiB name: bdev_io_58083 00:09:35.373 size: 50.003479 MiB name: msgpool_58083 00:09:35.373 size: 36.509338 MiB name: fsdev_io_58083 00:09:35.373 size: 
21.763794 MiB name: PDU_Pool 00:09:35.373 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:35.373 size: 4.133484 MiB name: evtpool_58083 00:09:35.373 size: 0.026123 MiB name: Session_Pool 00:09:35.373 end mempools------- 00:09:35.373 6 memzones totaling size 4.142822 MiB 00:09:35.373 size: 1.000366 MiB name: RG_ring_0_58083 00:09:35.373 size: 1.000366 MiB name: RG_ring_1_58083 00:09:35.373 size: 1.000366 MiB name: RG_ring_4_58083 00:09:35.373 size: 1.000366 MiB name: RG_ring_5_58083 00:09:35.373 size: 0.125366 MiB name: RG_ring_2_58083 00:09:35.373 size: 0.015991 MiB name: RG_ring_3_58083 00:09:35.373 end memzones------- 00:09:35.373 06:35:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:35.373 heap id: 0 total size: 824.000000 MiB number of busy elements: 324 number of free elements: 18 00:09:35.373 list of free elements. size: 16.779175 MiB 00:09:35.373 element at address: 0x200006400000 with size: 1.995972 MiB 00:09:35.373 element at address: 0x20000a600000 with size: 1.995972 MiB 00:09:35.373 element at address: 0x200003e00000 with size: 1.991028 MiB 00:09:35.373 element at address: 0x200019500040 with size: 0.999939 MiB 00:09:35.373 element at address: 0x200019900040 with size: 0.999939 MiB 00:09:35.373 element at address: 0x200019a00000 with size: 0.999084 MiB 00:09:35.373 element at address: 0x200032600000 with size: 0.994324 MiB 00:09:35.373 element at address: 0x200000400000 with size: 0.992004 MiB 00:09:35.373 element at address: 0x200019200000 with size: 0.959656 MiB 00:09:35.373 element at address: 0x200019d00040 with size: 0.936401 MiB 00:09:35.373 element at address: 0x200000200000 with size: 0.716980 MiB 00:09:35.373 element at address: 0x20001b400000 with size: 0.560486 MiB 00:09:35.373 element at address: 0x200000c00000 with size: 0.489197 MiB 00:09:35.373 element at address: 0x200019600000 with size: 0.487976 MiB 00:09:35.373 element at address: 0x200019e00000 
with size: 0.485413 MiB
00:09:35.373 element at address: 0x200012c00000 with size: 0.433472 MiB
00:09:35.373 element at address: 0x200028800000 with size: 0.390442 MiB
00:09:35.373 element at address: 0x200000800000 with size: 0.350891 MiB
00:09:35.373 list of standard malloc elements. size: 199.289917 MiB
00:09:35.373 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:09:35.373 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:09:35.373 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:09:35.373 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:09:35.373 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:09:35.373 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:09:35.373 element at address: 0x200019deff40 with size: 0.062683 MiB
00:09:35.373 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:09:35.373 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:09:35.373 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:09:35.373 element at address: 0x200012bff040 with size: 0.000305 MiB
00:09:35.373 [remaining malloc elements, 0x2000002d7b00 through 0x20002886fe80, each 0.000244 MiB, elided]
00:09:35.375 list of memzone associated elements.
size: 607.930908 MiB
00:09:35.375 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:09:35.375 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:09:35.375 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:09:35.375 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:09:35.375 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:09:35.375 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58083_0
00:09:35.375 element at address: 0x200000dff340 with size: 48.003113 MiB
00:09:35.375 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58083_0
00:09:35.375 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:09:35.375 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58083_0
00:09:35.375 element at address: 0x200019fbe900 with size: 20.255615 MiB
00:09:35.375 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:09:35.375 element at address: 0x2000327feb00 with size: 18.005127 MiB
00:09:35.375 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:09:35.375 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:09:35.375 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58083_0
00:09:35.375 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:09:35.375 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58083
00:09:35.375 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:09:35.375 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58083
00:09:35.375 element at address: 0x2000196fde00 with size: 1.008179 MiB
00:09:35.375 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:09:35.375 element at address: 0x200019ebc780 with size: 1.008179 MiB
00:09:35.375 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:09:35.375 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:09:35.375 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:09:35.375 element at address: 0x200012cefcc0 with size: 1.008179 MiB
00:09:35.375 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:09:35.375 element at address: 0x200000cff100 with size: 1.000549 MiB
00:09:35.375 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58083
00:09:35.375 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:09:35.375 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58083
00:09:35.375 element at address: 0x200019affd40 with size: 1.000549 MiB
00:09:35.375 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58083
00:09:35.375 element at address: 0x2000326fe8c0 with size: 1.000549 MiB
00:09:35.375 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58083
00:09:35.375 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:09:35.375 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58083
00:09:35.375 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:09:35.375 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58083
00:09:35.375 element at address: 0x20001967dac0 with size: 0.500549 MiB
00:09:35.375 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:09:35.375 element at address: 0x200012c6f980 with size: 0.500549 MiB
00:09:35.375 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:09:35.375 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:09:35.375 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:09:35.375 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:09:35.375 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58083
00:09:35.375 element at address: 0x20000085df80 with size: 0.125549 MiB
00:09:35.375 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58083
00:09:35.375 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:09:35.375 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:09:35.375 element at address: 0x200028864140 with size: 0.023804 MiB
00:09:35.375 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:09:35.375 element at address: 0x200000859d40 with size: 0.016174 MiB
00:09:35.375 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58083
00:09:35.375 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:09:35.375 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:09:35.375 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:09:35.375 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58083
00:09:35.375 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:09:35.375 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58083
00:09:35.375 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:09:35.375 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58083
00:09:35.375 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:09:35.375 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:09:35.375 06:35:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:09:35.375 06:35:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58083
00:09:35.375 06:35:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58083 ']'
00:09:35.375 06:35:53 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58083
00:09:35.375 06:35:53 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:09:35.375 06:35:53 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:35.375 06:35:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58083
00:09:35.375 killing process with pid 58083
06:35:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
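The killprocess trace follows a common teardown pattern: probe liveness with `kill -0` (which delivers no signal, only an existence check), look up the process name with `ps`, then kill and reap. A minimal standalone sketch of that pattern (the helper name `killprocess_sketch` is ours, not the actual autotest_common.sh function):

```shell
#!/usr/bin/env bash
# Sketch of the liveness-check-then-kill teardown seen in the trace.
killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1                # no pid supplied
    kill -0 "$pid" 2>/dev/null || return 0   # kill -0: existence probe only
    echo "killing process with pid $pid"
    kill "$pid"                              # default SIGTERM
    wait "$pid" 2>/dev/null || true          # reap if it is our child
}

sleep 5 &              # demo target process
killprocess_sketch "$!"
```

`wait` only works for children of the current shell; killing an unrelated pid would need a polling loop on `kill -0` instead.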
00:09:35.375 06:35:53 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:35.375 06:35:53 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58083'
00:09:35.375 06:35:53 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58083
00:09:35.375 06:35:53 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58083
00:09:37.903 ************************************
00:09:37.903 END TEST dpdk_mem_utility
00:09:37.903 ************************************
00:09:37.903 
00:09:37.903 real 0m4.189s
00:09:37.904 user 0m4.124s
00:09:37.904 sys 0m0.726s
00:09:37.904 06:35:56 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:37.904 06:35:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:37.904 06:35:56 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:09:37.904 06:35:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:37.904 06:35:56 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:37.904 06:35:56 -- common/autotest_common.sh@10 -- # set +x
00:09:37.904 ************************************
00:09:37.904 START TEST event
00:09:37.904 ************************************
00:09:37.904 06:35:56 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:09:37.904 * Looking for test storage...
00:09:37.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:09:37.904 06:35:56 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:37.904 06:35:56 event -- common/autotest_common.sh@1711 -- # lcov --version
00:09:37.904 06:35:56 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:37.904 06:35:56 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:37.904 06:35:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:37.904 06:35:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:37.904 06:35:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:37.904 06:35:56 event -- scripts/common.sh@336 -- # IFS=.-:
00:09:37.904 06:35:56 event -- scripts/common.sh@336 -- # read -ra ver1
00:09:37.904 06:35:56 event -- scripts/common.sh@337 -- # IFS=.-:
00:09:37.904 06:35:56 event -- scripts/common.sh@337 -- # read -ra ver2
00:09:37.904 06:35:56 event -- scripts/common.sh@338 -- # local 'op=<'
00:09:37.904 06:35:56 event -- scripts/common.sh@340 -- # ver1_l=2
00:09:37.904 06:35:56 event -- scripts/common.sh@341 -- # ver2_l=1
00:09:37.904 06:35:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:37.904 06:35:56 event -- scripts/common.sh@344 -- # case "$op" in
00:09:37.904 06:35:56 event -- scripts/common.sh@345 -- # : 1
00:09:37.904 06:35:56 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:37.904 06:35:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:37.904 06:35:56 event -- scripts/common.sh@365 -- # decimal 1
00:09:37.904 06:35:56 event -- scripts/common.sh@353 -- # local d=1
00:09:37.904 06:35:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:37.904 06:35:56 event -- scripts/common.sh@355 -- # echo 1
00:09:37.904 06:35:56 event -- scripts/common.sh@365 -- # ver1[v]=1
00:09:37.904 06:35:56 event -- scripts/common.sh@366 -- # decimal 2
00:09:37.904 06:35:56 event -- scripts/common.sh@353 -- # local d=2
00:09:37.904 06:35:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:37.904 06:35:56 event -- scripts/common.sh@355 -- # echo 2
00:09:37.904 06:35:56 event -- scripts/common.sh@366 -- # ver2[v]=2
00:09:37.904 06:35:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:37.904 06:35:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:37.904 06:35:56 event -- scripts/common.sh@368 -- # return 0
00:09:37.904 06:35:56 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:37.904 06:35:56 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:37.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.904 --rc genhtml_branch_coverage=1
00:09:37.904 --rc genhtml_function_coverage=1
00:09:37.904 --rc genhtml_legend=1
00:09:37.904 --rc geninfo_all_blocks=1
00:09:37.904 --rc geninfo_unexecuted_blocks=1
00:09:37.904 
00:09:37.904 '
00:09:37.904 06:35:56 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:37.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.904 --rc genhtml_branch_coverage=1
00:09:37.904 --rc genhtml_function_coverage=1
00:09:37.904 --rc genhtml_legend=1
00:09:37.904 --rc geninfo_all_blocks=1
00:09:37.904 --rc geninfo_unexecuted_blocks=1
00:09:37.904 
00:09:37.904 '
00:09:37.904 06:35:56 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:09:37.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.904 --rc genhtml_branch_coverage=1
00:09:37.904 --rc genhtml_function_coverage=1
00:09:37.904 --rc genhtml_legend=1
00:09:37.904 --rc geninfo_all_blocks=1
00:09:37.904 --rc geninfo_unexecuted_blocks=1
00:09:37.904 
00:09:37.904 '
00:09:37.904 06:35:56 event -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:09:37.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.904 --rc genhtml_branch_coverage=1
00:09:37.904 --rc genhtml_function_coverage=1
00:09:37.904 --rc genhtml_legend=1
00:09:37.904 --rc geninfo_all_blocks=1
00:09:37.904 --rc geninfo_unexecuted_blocks=1
00:09:37.904 
00:09:37.904 '
00:09:37.904 06:35:56 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:09:37.904 06:35:56 event -- bdev/nbd_common.sh@6 -- # set -e
00:09:37.904 06:35:56 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:37.904 06:35:56 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:09:37.904 06:35:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:37.904 06:35:56 event -- common/autotest_common.sh@10 -- # set +x
00:09:37.904 ************************************
00:09:37.904 START TEST event_perf
00:09:37.904 ************************************
00:09:37.904 06:35:56 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:37.904 Running I/O for 1 seconds...[2024-12-06 06:35:56.382469] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization...
00:09:37.904 [2024-12-06 06:35:56.382818] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58191 ] 00:09:38.162 [2024-12-06 06:35:56.557654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.162 [2024-12-06 06:35:56.701971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.162 [2024-12-06 06:35:56.702091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.162 [2024-12-06 06:35:56.702228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.162 Running I/O for 1 seconds...[2024-12-06 06:35:56.702243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.535 00:09:39.535 lcore 0: 194591 00:09:39.535 lcore 1: 194591 00:09:39.535 lcore 2: 194592 00:09:39.535 lcore 3: 194592 00:09:39.535 done. 
00:09:39.535 00:09:39.535 real 0m1.614s 00:09:39.535 user 0m4.365s 00:09:39.535 sys 0m0.119s 00:09:39.535 06:35:57 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.535 ************************************ 00:09:39.535 END TEST event_perf 00:09:39.535 ************************************ 00:09:39.535 06:35:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:39.535 06:35:57 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:39.535 06:35:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:39.535 06:35:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.535 06:35:57 event -- common/autotest_common.sh@10 -- # set +x 00:09:39.535 ************************************ 00:09:39.535 START TEST event_reactor 00:09:39.535 ************************************ 00:09:39.535 06:35:58 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:39.535 [2024-12-06 06:35:58.058703] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:09:39.535 [2024-12-06 06:35:58.058890] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58230 ] 00:09:39.798 [2024-12-06 06:35:58.244925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.798 [2024-12-06 06:35:58.378188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.175 test_start 00:09:41.175 oneshot 00:09:41.175 tick 100 00:09:41.175 tick 100 00:09:41.175 tick 250 00:09:41.175 tick 100 00:09:41.175 tick 100 00:09:41.175 tick 100 00:09:41.175 tick 250 00:09:41.175 tick 500 00:09:41.175 tick 100 00:09:41.175 tick 100 00:09:41.175 tick 250 00:09:41.175 tick 100 00:09:41.175 tick 100 00:09:41.175 test_end 00:09:41.175 00:09:41.175 real 0m1.621s 00:09:41.175 user 0m1.418s 00:09:41.175 sys 0m0.092s 00:09:41.175 06:35:59 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.175 ************************************ 00:09:41.175 END TEST event_reactor 00:09:41.175 06:35:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:41.175 ************************************ 00:09:41.175 06:35:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:41.175 06:35:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:41.175 06:35:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.175 06:35:59 event -- common/autotest_common.sh@10 -- # set +x 00:09:41.175 ************************************ 00:09:41.175 START TEST event_reactor_perf 00:09:41.175 ************************************ 00:09:41.175 06:35:59 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:41.175 [2024-12-06 
06:35:59.742246] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:09:41.175 [2024-12-06 06:35:59.742792] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58272 ] 00:09:41.434 [2024-12-06 06:35:59.940976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.693 [2024-12-06 06:36:00.102494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.070 test_start 00:09:43.070 test_end 00:09:43.070 Performance: 262740 events per second 00:09:43.070 ************************************ 00:09:43.070 END TEST event_reactor_perf 00:09:43.070 ************************************ 00:09:43.070 00:09:43.070 real 0m1.663s 00:09:43.070 user 0m1.429s 00:09:43.070 sys 0m0.122s 00:09:43.070 06:36:01 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.070 06:36:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:43.070 06:36:01 event -- event/event.sh@49 -- # uname -s 00:09:43.070 06:36:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:43.070 06:36:01 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:43.070 06:36:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.070 06:36:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.070 06:36:01 event -- common/autotest_common.sh@10 -- # set +x 00:09:43.070 ************************************ 00:09:43.070 START TEST event_scheduler 00:09:43.070 ************************************ 00:09:43.070 06:36:01 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:43.070 * Looking for test storage... 
00:09:43.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:43.070 06:36:01 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:43.070 06:36:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:09:43.070 06:36:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:43.070 06:36:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.070 06:36:01 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:43.070 06:36:01 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.070 06:36:01 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:43.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.070 --rc genhtml_branch_coverage=1 00:09:43.070 --rc genhtml_function_coverage=1 00:09:43.070 --rc genhtml_legend=1 00:09:43.070 --rc geninfo_all_blocks=1 00:09:43.070 --rc geninfo_unexecuted_blocks=1 00:09:43.070 00:09:43.070 ' 00:09:43.070 06:36:01 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:43.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.070 --rc genhtml_branch_coverage=1 00:09:43.070 --rc genhtml_function_coverage=1 00:09:43.070 --rc 
genhtml_legend=1 00:09:43.070 --rc geninfo_all_blocks=1 00:09:43.070 --rc geninfo_unexecuted_blocks=1 00:09:43.070 00:09:43.070 ' 00:09:43.070 06:36:01 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:43.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.070 --rc genhtml_branch_coverage=1 00:09:43.070 --rc genhtml_function_coverage=1 00:09:43.070 --rc genhtml_legend=1 00:09:43.070 --rc geninfo_all_blocks=1 00:09:43.070 --rc geninfo_unexecuted_blocks=1 00:09:43.070 00:09:43.070 ' 00:09:43.070 06:36:01 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:43.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.070 --rc genhtml_branch_coverage=1 00:09:43.070 --rc genhtml_function_coverage=1 00:09:43.070 --rc genhtml_legend=1 00:09:43.070 --rc geninfo_all_blocks=1 00:09:43.070 --rc geninfo_unexecuted_blocks=1 00:09:43.070 00:09:43.070 ' 00:09:43.070 06:36:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:43.070 06:36:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58343 00:09:43.070 06:36:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:43.070 06:36:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:43.070 06:36:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58343 00:09:43.070 06:36:01 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58343 ']' 00:09:43.070 06:36:01 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.070 06:36:01 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.070 06:36:01 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:09:43.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.070 06:36:01 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.070 06:36:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:43.070 [2024-12-06 06:36:01.712677] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:09:43.070 [2024-12-06 06:36:01.713102] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58343 ] 00:09:43.329 [2024-12-06 06:36:01.903670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.588 [2024-12-06 06:36:02.078476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.588 [2024-12-06 06:36:02.078585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.588 [2024-12-06 06:36:02.078747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.588 [2024-12-06 06:36:02.078758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:44.156 06:36:02 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.156 06:36:02 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:09:44.156 06:36:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:44.156 06:36:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.156 06:36:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:44.156 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:44.156 POWER: Cannot set governor of lcore 0 to userspace 00:09:44.156 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:44.156 POWER: Cannot set governor of lcore 0 to performance 00:09:44.156 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:44.156 POWER: Cannot set governor of lcore 0 to userspace 00:09:44.156 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:44.156 POWER: Cannot set governor of lcore 0 to userspace 00:09:44.156 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:09:44.156 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:44.156 POWER: Unable to set Power Management Environment for lcore 0 00:09:44.156 [2024-12-06 06:36:02.721727] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:09:44.156 [2024-12-06 06:36:02.721755] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:09:44.156 [2024-12-06 06:36:02.721770] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:44.156 [2024-12-06 06:36:02.721796] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:44.156 [2024-12-06 06:36:02.721807] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:44.156 [2024-12-06 06:36:02.721821] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:44.156 06:36:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.156 06:36:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:44.156 06:36:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.156 06:36:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:44.726 [2024-12-06 06:36:03.064686] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:09:44.726 06:36:03 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.726 06:36:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:44.726 06:36:03 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.726 06:36:03 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.726 06:36:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:44.726 ************************************ 00:09:44.726 START TEST scheduler_create_thread 00:09:44.726 ************************************ 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.726 2 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.726 3 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.726 4 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.726 5 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.726 6 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:09:44.726 7 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.726 8 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.726 06:36:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:44.727 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.727 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.727 9 00:09:44.727 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.727 06:36:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:44.727 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.727 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.727 10 00:09:44.727 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.727 06:36:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:09:44.727 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.727 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.727 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.727 06:36:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:44.727 06:36:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:44.727 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.727 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:45.294 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.294 06:36:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:45.294 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.294 06:36:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:46.714 06:36:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.714 06:36:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:46.714 06:36:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:46.714 06:36:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.714 06:36:05 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:47.651 06:36:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.651 00:09:47.651 ************************************ 00:09:47.651 END TEST scheduler_create_thread 00:09:47.651 ************************************ 00:09:47.651 real 0m3.106s 00:09:47.651 user 0m0.020s 00:09:47.651 sys 0m0.007s 00:09:47.651 06:36:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.651 06:36:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:47.651 06:36:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:47.651 06:36:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58343 00:09:47.651 06:36:06 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58343 ']' 00:09:47.651 06:36:06 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58343 00:09:47.651 06:36:06 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:09:47.651 06:36:06 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.651 06:36:06 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58343 00:09:47.651 killing process with pid 58343 00:09:47.651 06:36:06 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:47.651 06:36:06 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:47.651 06:36:06 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58343' 00:09:47.651 06:36:06 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58343 00:09:47.651 06:36:06 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58343 00:09:48.219 [2024-12-06 06:36:06.564820] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:49.182 00:09:49.182 real 0m6.292s 00:09:49.182 user 0m12.723s 00:09:49.182 sys 0m0.517s 00:09:49.182 06:36:07 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.182 ************************************ 00:09:49.182 END TEST event_scheduler 00:09:49.182 ************************************ 00:09:49.182 06:36:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:49.182 06:36:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:49.182 06:36:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:49.182 06:36:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:49.183 06:36:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.183 06:36:07 event -- common/autotest_common.sh@10 -- # set +x 00:09:49.183 ************************************ 00:09:49.183 START TEST app_repeat 00:09:49.183 ************************************ 00:09:49.183 06:36:07 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:09:49.183 06:36:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.183 06:36:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:49.183 06:36:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:49.183 06:36:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:49.183 06:36:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:49.183 06:36:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:49.183 06:36:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:49.183 Process app_repeat pid: 58460 00:09:49.183 spdk_app_start Round 0 00:09:49.183 06:36:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58460 00:09:49.183 06:36:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 
1' SIGINT SIGTERM EXIT 00:09:49.183 06:36:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58460' 00:09:49.183 06:36:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:49.183 06:36:07 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:49.183 06:36:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:49.183 06:36:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58460 /var/tmp/spdk-nbd.sock 00:09:49.183 06:36:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58460 ']' 00:09:49.183 06:36:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:49.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:49.183 06:36:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.183 06:36:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:49.183 06:36:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.183 06:36:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:49.508 [2024-12-06 06:36:07.828682] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:09:49.508 [2024-12-06 06:36:07.828876] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58460 ] 00:09:49.508 [2024-12-06 06:36:08.017175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:49.767 [2024-12-06 06:36:08.152492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.767 [2024-12-06 06:36:08.152493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.334 06:36:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.334 06:36:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:50.334 06:36:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:50.592 Malloc0 00:09:50.849 06:36:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:51.108 Malloc1 00:09:51.108 06:36:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:51.108 06:36:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.108 06:36:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:51.108 06:36:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:51.108 06:36:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.108 06:36:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:51.108 06:36:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:51.108 06:36:09 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.108 06:36:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:51.108 06:36:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:51.108 06:36:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.108 06:36:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:51.108 06:36:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:51.108 06:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:51.108 06:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:51.108 06:36:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:51.366 /dev/nbd0 00:09:51.366 06:36:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:51.366 06:36:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:51.366 06:36:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:51.366 06:36:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:51.366 06:36:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:51.366 06:36:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:51.366 06:36:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:51.366 06:36:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:51.366 06:36:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:51.366 06:36:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:51.366 06:36:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:51.366 1+0 records in 00:09:51.366 1+0 
records out 00:09:51.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436932 s, 9.4 MB/s 00:09:51.366 06:36:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:51.366 06:36:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:51.366 06:36:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:51.366 06:36:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:51.366 06:36:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:51.366 06:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:51.366 06:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:51.366 06:36:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:51.623 /dev/nbd1 00:09:51.623 06:36:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:51.623 06:36:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:51.623 06:36:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:51.623 06:36:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:51.623 06:36:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:51.624 06:36:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:51.624 06:36:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:51.624 06:36:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:51.624 06:36:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:51.624 06:36:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:51.624 06:36:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:51.624 1+0 records in 00:09:51.624 1+0 records out 00:09:51.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369555 s, 11.1 MB/s 00:09:51.624 06:36:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:51.624 06:36:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:51.624 06:36:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:51.624 06:36:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:51.624 06:36:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:51.624 06:36:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:51.624 06:36:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:51.624 06:36:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:51.624 06:36:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.624 06:36:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:52.191 { 00:09:52.191 "nbd_device": "/dev/nbd0", 00:09:52.191 "bdev_name": "Malloc0" 00:09:52.191 }, 00:09:52.191 { 00:09:52.191 "nbd_device": "/dev/nbd1", 00:09:52.191 "bdev_name": "Malloc1" 00:09:52.191 } 00:09:52.191 ]' 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:52.191 { 00:09:52.191 "nbd_device": "/dev/nbd0", 00:09:52.191 "bdev_name": "Malloc0" 00:09:52.191 }, 00:09:52.191 { 00:09:52.191 "nbd_device": "/dev/nbd1", 00:09:52.191 "bdev_name": "Malloc1" 00:09:52.191 } 00:09:52.191 ]' 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:52.191 /dev/nbd1' 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:52.191 /dev/nbd1' 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:52.191 256+0 records in 00:09:52.191 256+0 records out 00:09:52.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00671728 s, 156 MB/s 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:52.191 256+0 records in 00:09:52.191 256+0 records out 00:09:52.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296847 s, 35.3 MB/s 00:09:52.191 06:36:10 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:52.191 256+0 records in 00:09:52.191 256+0 records out 00:09:52.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308957 s, 33.9 MB/s 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.191 06:36:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:52.449 06:36:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:52.449 06:36:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:52.449 06:36:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:52.449 06:36:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.450 06:36:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.450 06:36:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:52.450 06:36:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:52.450 06:36:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.450 06:36:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.450 06:36:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:52.707 06:36:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:52.707 06:36:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:52.707 06:36:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:52.707 06:36:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.707 06:36:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.707 06:36:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:52.707 06:36:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:09:52.707 06:36:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.707 06:36:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:52.707 06:36:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.707 06:36:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:52.965 06:36:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:52.965 06:36:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:52.965 06:36:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:53.222 06:36:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:53.222 06:36:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:53.222 06:36:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:53.222 06:36:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:53.222 06:36:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:53.222 06:36:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:53.222 06:36:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:53.222 06:36:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:53.222 06:36:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:53.222 06:36:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:53.789 06:36:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:54.731 [2024-12-06 06:36:13.195191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:54.732 [2024-12-06 06:36:13.319547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.732 [2024-12-06 06:36:13.319625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.991 
[2024-12-06 06:36:13.513001] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:54.991 [2024-12-06 06:36:13.513098] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:56.894 06:36:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:56.894 spdk_app_start Round 1 00:09:56.894 06:36:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:56.894 06:36:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58460 /var/tmp/spdk-nbd.sock 00:09:56.894 06:36:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58460 ']' 00:09:56.894 06:36:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:56.894 06:36:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:56.894 06:36:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:56.894 06:36:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.894 06:36:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:56.894 06:36:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.894 06:36:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:56.894 06:36:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:57.153 Malloc0 00:09:57.411 06:36:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:57.669 Malloc1 00:09:57.669 06:36:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:57.669 06:36:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.669 06:36:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:57.669 06:36:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:57.669 06:36:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:57.669 06:36:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:57.669 06:36:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:57.669 06:36:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.669 06:36:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:57.669 06:36:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:57.669 06:36:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:57.669 06:36:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:57.669 06:36:16 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:57.669 06:36:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:57.669 06:36:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:57.669 06:36:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:57.927 /dev/nbd0 00:09:57.927 06:36:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:57.928 06:36:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:57.928 06:36:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:57.928 06:36:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:57.928 06:36:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:57.928 06:36:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:57.928 06:36:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:57.928 06:36:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:57.928 06:36:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:57.928 06:36:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:57.928 06:36:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:57.928 1+0 records in 00:09:57.928 1+0 records out 00:09:57.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314582 s, 13.0 MB/s 00:09:57.928 06:36:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:57.928 06:36:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:57.928 06:36:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:57.928 
06:36:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:57.928 06:36:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:57.928 06:36:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:57.928 06:36:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:57.928 06:36:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:58.186 /dev/nbd1 00:09:58.186 06:36:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:58.186 06:36:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:58.186 06:36:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:58.186 06:36:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:58.186 06:36:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:58.186 06:36:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:58.186 06:36:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:58.186 06:36:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:58.186 06:36:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:58.186 06:36:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:58.186 06:36:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:58.186 1+0 records in 00:09:58.186 1+0 records out 00:09:58.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343634 s, 11.9 MB/s 00:09:58.186 06:36:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:58.186 06:36:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:58.186 06:36:16 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:58.186 06:36:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:58.186 06:36:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:58.186 06:36:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:58.186 06:36:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:58.186 06:36:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:58.186 06:36:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.186 06:36:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:58.755 { 00:09:58.755 "nbd_device": "/dev/nbd0", 00:09:58.755 "bdev_name": "Malloc0" 00:09:58.755 }, 00:09:58.755 { 00:09:58.755 "nbd_device": "/dev/nbd1", 00:09:58.755 "bdev_name": "Malloc1" 00:09:58.755 } 00:09:58.755 ]' 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:58.755 { 00:09:58.755 "nbd_device": "/dev/nbd0", 00:09:58.755 "bdev_name": "Malloc0" 00:09:58.755 }, 00:09:58.755 { 00:09:58.755 "nbd_device": "/dev/nbd1", 00:09:58.755 "bdev_name": "Malloc1" 00:09:58.755 } 00:09:58.755 ]' 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:58.755 /dev/nbd1' 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:58.755 /dev/nbd1' 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:58.755 
06:36:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:58.755 256+0 records in 00:09:58.755 256+0 records out 00:09:58.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00759313 s, 138 MB/s 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:58.755 256+0 records in 00:09:58.755 256+0 records out 00:09:58.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255632 s, 41.0 MB/s 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:58.755 256+0 records in 00:09:58.755 256+0 records out 00:09:58.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292555 s, 35.8 MB/s 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:58.755 06:36:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:59.015 06:36:17 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:59.015 06:36:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:59.015 06:36:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:59.015 06:36:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:59.015 06:36:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:59.015 06:36:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:59.015 06:36:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:59.015 06:36:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:59.015 06:36:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:59.015 06:36:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:59.273 06:36:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:59.273 06:36:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:59.273 06:36:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:59.274 06:36:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:59.274 06:36:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:59.274 06:36:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:59.274 06:36:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:59.274 06:36:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:59.274 06:36:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:59.274 06:36:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.274 06:36:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:59.532 06:36:18 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:59.532 06:36:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:59.532 06:36:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:59.792 06:36:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:59.792 06:36:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:59.792 06:36:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:59.792 06:36:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:59.792 06:36:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:59.792 06:36:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:59.792 06:36:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:59.792 06:36:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:59.792 06:36:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:59.792 06:36:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:00.051 06:36:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:01.448 [2024-12-06 06:36:19.762944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:01.448 [2024-12-06 06:36:19.888022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.448 [2024-12-06 06:36:19.888026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.448 [2024-12-06 06:36:20.086220] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:01.449 [2024-12-06 06:36:20.086351] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:03.356 spdk_app_start Round 2 00:10:03.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:10:03.356 06:36:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:03.356 06:36:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:03.356 06:36:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58460 /var/tmp/spdk-nbd.sock 00:10:03.356 06:36:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58460 ']' 00:10:03.356 06:36:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:03.356 06:36:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.356 06:36:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:03.356 06:36:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.356 06:36:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:03.356 06:36:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.356 06:36:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:03.356 06:36:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:03.923 Malloc0 00:10:03.923 06:36:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:04.181 Malloc1 00:10:04.181 06:36:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:04.181 06:36:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.181 06:36:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:04.181 06:36:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:04.181 06:36:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:04.181 06:36:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:04.181 06:36:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:04.181 06:36:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.181 06:36:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:04.181 06:36:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:04.181 06:36:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:04.181 06:36:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:04.181 06:36:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:04.181 06:36:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:04.181 06:36:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:04.181 06:36:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:04.438 /dev/nbd0 00:10:04.438 06:36:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:04.438 06:36:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:04.438 06:36:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:04.438 06:36:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:04.438 06:36:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:04.438 06:36:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:04.438 06:36:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:04.438 06:36:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:04.438 06:36:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:10:04.438 06:36:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:04.438 06:36:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:04.438 1+0 records in 00:10:04.438 1+0 records out 00:10:04.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325646 s, 12.6 MB/s 00:10:04.438 06:36:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:04.438 06:36:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:04.438 06:36:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:04.438 06:36:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:04.438 06:36:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:04.438 06:36:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:04.438 06:36:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:04.438 06:36:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:04.697 /dev/nbd1 00:10:04.697 06:36:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:04.697 06:36:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:04.697 06:36:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:04.697 06:36:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:04.697 06:36:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:04.697 06:36:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:04.697 06:36:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:04.697 06:36:23 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:10:04.697 06:36:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:04.697 06:36:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:04.697 06:36:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:04.697 1+0 records in 00:10:04.697 1+0 records out 00:10:04.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401007 s, 10.2 MB/s 00:10:04.697 06:36:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:04.697 06:36:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:04.697 06:36:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:04.697 06:36:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:04.697 06:36:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:04.697 06:36:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:04.697 06:36:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:04.697 06:36:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:04.697 06:36:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.697 06:36:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:04.955 06:36:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:04.955 { 00:10:04.955 "nbd_device": "/dev/nbd0", 00:10:04.955 "bdev_name": "Malloc0" 00:10:04.955 }, 00:10:04.955 { 00:10:04.955 "nbd_device": "/dev/nbd1", 00:10:04.955 "bdev_name": "Malloc1" 00:10:04.955 } 00:10:04.955 ]' 00:10:04.955 06:36:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:04.955 { 
00:10:04.955 "nbd_device": "/dev/nbd0", 00:10:04.955 "bdev_name": "Malloc0" 00:10:04.955 }, 00:10:04.955 { 00:10:04.955 "nbd_device": "/dev/nbd1", 00:10:04.955 "bdev_name": "Malloc1" 00:10:04.955 } 00:10:04.955 ]' 00:10:04.955 06:36:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:05.213 /dev/nbd1' 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:05.213 /dev/nbd1' 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:05.213 256+0 records in 00:10:05.213 256+0 records out 00:10:05.213 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00967191 s, 108 MB/s 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:05.213 06:36:23 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:05.213 256+0 records in 00:10:05.213 256+0 records out 00:10:05.213 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245728 s, 42.7 MB/s 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:05.213 256+0 records in 00:10:05.213 256+0 records out 00:10:05.213 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313828 s, 33.4 MB/s 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:05.213 06:36:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:05.471 06:36:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:05.471 06:36:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:05.471 06:36:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:05.471 06:36:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:05.471 06:36:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:05.471 06:36:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:05.471 06:36:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:05.471 06:36:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:05.471 06:36:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:05.471 06:36:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:05.729 06:36:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:05.729 06:36:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:05.729 06:36:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:05.729 06:36:24 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:05.729 06:36:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:05.729 06:36:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:05.729 06:36:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:05.729 06:36:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:05.729 06:36:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:05.729 06:36:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:05.729 06:36:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:06.302 06:36:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:06.302 06:36:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:06.302 06:36:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:06.302 06:36:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:06.302 06:36:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:06.302 06:36:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:06.302 06:36:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:06.302 06:36:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:06.302 06:36:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:06.302 06:36:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:06.302 06:36:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:06.302 06:36:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:06.302 06:36:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:06.560 06:36:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:07.934 
[2024-12-06 06:36:26.280006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:07.934 [2024-12-06 06:36:26.410395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.934 [2024-12-06 06:36:26.410405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.192 [2024-12-06 06:36:26.607551] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:08.192 [2024-12-06 06:36:26.607714] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:09.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:09.569 06:36:28 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58460 /var/tmp/spdk-nbd.sock 00:10:09.569 06:36:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58460 ']' 00:10:09.569 06:36:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:09.569 06:36:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.569 06:36:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:09.569 06:36:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.569 06:36:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:10.137 06:36:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.137 06:36:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:10.137 06:36:28 event.app_repeat -- event/event.sh@39 -- # killprocess 58460 00:10:10.137 06:36:28 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58460 ']' 00:10:10.137 06:36:28 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58460 00:10:10.137 06:36:28 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:10:10.137 06:36:28 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.137 06:36:28 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58460 00:10:10.137 killing process with pid 58460 00:10:10.137 06:36:28 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.137 06:36:28 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.137 06:36:28 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58460' 00:10:10.137 06:36:28 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58460 00:10:10.137 06:36:28 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58460 00:10:11.073 spdk_app_start is called in Round 0. 00:10:11.073 Shutdown signal received, stop current app iteration 00:10:11.073 Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 reinitialization... 00:10:11.073 spdk_app_start is called in Round 1. 00:10:11.073 Shutdown signal received, stop current app iteration 00:10:11.073 Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 reinitialization... 00:10:11.073 spdk_app_start is called in Round 2. 
00:10:11.073 Shutdown signal received, stop current app iteration 00:10:11.073 Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 reinitialization... 00:10:11.074 spdk_app_start is called in Round 3. 00:10:11.074 Shutdown signal received, stop current app iteration 00:10:11.074 06:36:29 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:11.074 06:36:29 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:11.074 00:10:11.074 real 0m21.807s 00:10:11.074 user 0m48.420s 00:10:11.074 sys 0m3.015s 00:10:11.074 06:36:29 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.074 ************************************ 00:10:11.074 END TEST app_repeat 00:10:11.074 ************************************ 00:10:11.074 06:36:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:11.074 06:36:29 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:11.074 06:36:29 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:11.074 06:36:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:11.074 06:36:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.074 06:36:29 event -- common/autotest_common.sh@10 -- # set +x 00:10:11.074 ************************************ 00:10:11.074 START TEST cpu_locks 00:10:11.074 ************************************ 00:10:11.074 06:36:29 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:11.074 * Looking for test storage... 
00:10:11.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:11.074 06:36:29 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:11.074 06:36:29 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:10:11.074 06:36:29 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:11.333 06:36:29 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.333 06:36:29 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:10:11.333 06:36:29 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.333 06:36:29 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:11.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.333 --rc genhtml_branch_coverage=1 00:10:11.333 --rc genhtml_function_coverage=1 00:10:11.333 --rc genhtml_legend=1 00:10:11.333 --rc geninfo_all_blocks=1 00:10:11.333 --rc geninfo_unexecuted_blocks=1 00:10:11.333 00:10:11.333 ' 00:10:11.333 06:36:29 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:11.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.333 --rc genhtml_branch_coverage=1 00:10:11.333 --rc genhtml_function_coverage=1 00:10:11.333 --rc genhtml_legend=1 00:10:11.333 --rc geninfo_all_blocks=1 00:10:11.333 --rc geninfo_unexecuted_blocks=1 
00:10:11.333 00:10:11.333 ' 00:10:11.333 06:36:29 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:11.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.333 --rc genhtml_branch_coverage=1 00:10:11.333 --rc genhtml_function_coverage=1 00:10:11.333 --rc genhtml_legend=1 00:10:11.334 --rc geninfo_all_blocks=1 00:10:11.334 --rc geninfo_unexecuted_blocks=1 00:10:11.334 00:10:11.334 ' 00:10:11.334 06:36:29 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:11.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.334 --rc genhtml_branch_coverage=1 00:10:11.334 --rc genhtml_function_coverage=1 00:10:11.334 --rc genhtml_legend=1 00:10:11.334 --rc geninfo_all_blocks=1 00:10:11.334 --rc geninfo_unexecuted_blocks=1 00:10:11.334 00:10:11.334 ' 00:10:11.334 06:36:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:11.334 06:36:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:11.334 06:36:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:11.334 06:36:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:11.334 06:36:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:11.334 06:36:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.334 06:36:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:11.334 ************************************ 00:10:11.334 START TEST default_locks 00:10:11.334 ************************************ 00:10:11.334 06:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:10:11.334 06:36:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58936 00:10:11.334 06:36:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58936 00:10:11.334 06:36:29 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:11.334 06:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58936 ']' 00:10:11.334 06:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.334 06:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.334 06:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.334 06:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.334 06:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:11.334 [2024-12-06 06:36:29.950543] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:10:11.334 [2024-12-06 06:36:29.950727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58936 ] 00:10:11.593 [2024-12-06 06:36:30.136270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.852 [2024-12-06 06:36:30.268494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.794 06:36:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.794 06:36:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:10:12.794 06:36:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58936 00:10:12.794 06:36:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58936 00:10:12.794 06:36:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:13.053 06:36:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58936 00:10:13.053 06:36:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58936 ']' 00:10:13.053 06:36:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58936 00:10:13.053 06:36:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:10:13.053 06:36:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.053 06:36:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58936 00:10:13.053 06:36:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.053 06:36:31 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.053 killing process with pid 58936 00:10:13.053 06:36:31 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58936' 00:10:13.053 06:36:31 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58936 00:10:13.053 06:36:31 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58936 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58936 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58936 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58936 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58936 ']' 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:15.669 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58936) - No such process 00:10:15.669 ERROR: process (pid: 58936) is no longer running 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:15.669 00:10:15.669 real 0m4.073s 00:10:15.669 user 0m4.070s 00:10:15.669 sys 0m0.768s 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.669 06:36:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:15.669 ************************************ 00:10:15.669 END TEST default_locks 00:10:15.669 ************************************ 00:10:15.669 06:36:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:15.669 06:36:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:10:15.669 06:36:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.669 06:36:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:15.669 ************************************ 00:10:15.669 START TEST default_locks_via_rpc 00:10:15.669 ************************************ 00:10:15.669 06:36:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:10:15.669 06:36:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59017 00:10:15.669 06:36:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:15.669 06:36:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59017 00:10:15.669 06:36:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59017 ']' 00:10:15.669 06:36:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.669 06:36:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.669 06:36:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.669 06:36:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.669 06:36:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.669 [2024-12-06 06:36:34.065837] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:10:15.669 [2024-12-06 06:36:34.066000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59017 ] 00:10:15.669 [2024-12-06 06:36:34.254346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.927 [2024-12-06 06:36:34.408837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.861 06:36:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.861 06:36:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:16.861 06:36:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:16.861 06:36:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.861 06:36:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.861 06:36:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.861 06:36:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:16.861 06:36:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:16.861 06:36:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:16.861 06:36:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:16.861 06:36:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:16.861 06:36:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.861 06:36:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.861 06:36:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.861 06:36:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59017 00:10:16.861 06:36:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59017 00:10:16.861 06:36:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:17.119 06:36:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59017 00:10:17.119 06:36:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59017 ']' 00:10:17.119 06:36:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59017 00:10:17.119 06:36:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:10:17.119 06:36:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.119 06:36:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59017 00:10:17.119 06:36:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.119 06:36:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.119 killing process with pid 59017 00:10:17.119 06:36:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59017' 00:10:17.119 06:36:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59017 00:10:17.119 06:36:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59017 00:10:19.652 00:10:19.652 real 0m4.049s 00:10:19.652 user 0m4.076s 00:10:19.652 sys 0m0.709s 00:10:19.652 06:36:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.652 06:36:37 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.652 ************************************ 00:10:19.652 END TEST default_locks_via_rpc 00:10:19.652 ************************************ 00:10:19.652 06:36:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:19.652 06:36:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:19.652 06:36:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.652 06:36:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:19.652 ************************************ 00:10:19.652 START TEST non_locking_app_on_locked_coremask 00:10:19.652 ************************************ 00:10:19.652 06:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:10:19.652 06:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59091 00:10:19.652 06:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:19.652 06:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59091 /var/tmp/spdk.sock 00:10:19.652 06:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59091 ']' 00:10:19.652 06:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.652 06:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:19.652 06:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.652 06:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.652 06:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:19.652 [2024-12-06 06:36:38.144019] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:10:19.652 [2024-12-06 06:36:38.144169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59091 ] 00:10:19.910 [2024-12-06 06:36:38.319149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.910 [2024-12-06 06:36:38.449971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.842 06:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.842 06:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:20.842 06:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59112 00:10:20.842 06:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:20.842 06:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59112 /var/tmp/spdk2.sock 00:10:20.842 06:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59112 ']' 00:10:20.842 06:36:39 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:20.842 06:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:20.842 06:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:20.842 06:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.842 06:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:20.842 [2024-12-06 06:36:39.442416] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:10:20.842 [2024-12-06 06:36:39.442597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59112 ] 00:10:21.100 [2024-12-06 06:36:39.640626] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:21.100 [2024-12-06 06:36:39.640705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.358 [2024-12-06 06:36:39.911103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.891 06:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:23.891 06:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:23.891 06:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59091 00:10:23.891 06:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59091 00:10:23.891 06:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:24.458 06:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59091 00:10:24.458 06:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59091 ']' 00:10:24.458 06:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59091 00:10:24.458 06:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:24.458 06:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.458 06:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59091 00:10:24.458 06:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.458 06:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.458 killing process with pid 59091 00:10:24.458 06:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59091' 00:10:24.458 06:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59091 00:10:24.458 06:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59091 00:10:29.722 06:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59112 00:10:29.722 06:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59112 ']' 00:10:29.722 06:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59112 00:10:29.722 06:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:29.722 06:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.722 06:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59112 00:10:29.722 killing process with pid 59112 00:10:29.722 06:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:29.722 06:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:29.722 06:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59112' 00:10:29.722 06:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59112 00:10:29.722 06:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59112 00:10:31.621 ************************************ 00:10:31.621 END TEST non_locking_app_on_locked_coremask 00:10:31.621 ************************************ 00:10:31.621 00:10:31.621 real 0m11.809s 
00:10:31.621 user 0m12.331s 00:10:31.621 sys 0m1.512s 00:10:31.621 06:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.621 06:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:31.621 06:36:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:31.621 06:36:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:31.621 06:36:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.621 06:36:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:31.621 ************************************ 00:10:31.621 START TEST locking_app_on_unlocked_coremask 00:10:31.621 ************************************ 00:10:31.621 06:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:10:31.621 06:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59260 00:10:31.621 06:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:31.622 06:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59260 /var/tmp/spdk.sock 00:10:31.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:31.622 06:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59260 ']' 00:10:31.622 06:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.622 06:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.622 06:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.622 06:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.622 06:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:31.622 [2024-12-06 06:36:50.006738] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:10:31.622 [2024-12-06 06:36:50.006902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59260 ] 00:10:31.622 [2024-12-06 06:36:50.182768] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:31.622 [2024-12-06 06:36:50.182827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.882 [2024-12-06 06:36:50.317322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.817 06:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.817 06:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:32.817 06:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59276 00:10:32.817 06:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:32.817 06:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59276 /var/tmp/spdk2.sock 00:10:32.817 06:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59276 ']' 00:10:32.817 06:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:32.817 06:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.817 06:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:32.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:32.817 06:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.817 06:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:32.817 [2024-12-06 06:36:51.332988] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:10:32.817 [2024-12-06 06:36:51.333422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59276 ] 00:10:33.075 [2024-12-06 06:36:51.528159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.334 [2024-12-06 06:36:51.793041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.889 06:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.889 06:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:35.889 06:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59276 00:10:35.889 06:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59276 00:10:35.889 06:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:36.456 06:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59260 00:10:36.456 06:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59260 ']' 00:10:36.456 06:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59260 00:10:36.456 06:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:36.456 06:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.456 06:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59260 00:10:36.456 06:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:10:36.456 06:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.456 killing process with pid 59260 00:10:36.456 06:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59260' 00:10:36.456 06:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59260 00:10:36.456 06:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59260 00:10:41.724 06:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59276 00:10:41.724 06:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59276 ']' 00:10:41.724 06:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59276 00:10:41.724 06:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:41.724 06:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.724 06:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59276 00:10:41.724 killing process with pid 59276 00:10:41.724 06:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.724 06:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.724 06:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59276' 00:10:41.724 06:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59276 00:10:41.724 06:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59276 00:10:43.629 ************************************ 00:10:43.629 END TEST locking_app_on_unlocked_coremask 00:10:43.629 ************************************ 00:10:43.629 00:10:43.629 real 0m12.210s 00:10:43.629 user 0m12.722s 00:10:43.629 sys 0m1.558s 00:10:43.629 06:37:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.629 06:37:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:43.629 06:37:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:43.629 06:37:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:43.629 06:37:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.629 06:37:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:43.629 ************************************ 00:10:43.629 START TEST locking_app_on_locked_coremask 00:10:43.629 ************************************ 00:10:43.629 06:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:10:43.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:43.629 06:37:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59431 00:10:43.629 06:37:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59431 /var/tmp/spdk.sock 00:10:43.629 06:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59431 ']' 00:10:43.629 06:37:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:43.629 06:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.629 06:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.629 06:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.629 06:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.629 06:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:43.888 [2024-12-06 06:37:02.303082] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:10:43.889 [2024-12-06 06:37:02.303291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59431 ] 00:10:43.889 [2024-12-06 06:37:02.484944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.147 [2024-12-06 06:37:02.623830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59452 00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59452 /var/tmp/spdk2.sock 00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59452 /var/tmp/spdk2.sock 00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:45.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59452 /var/tmp/spdk2.sock 00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59452 ']' 00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.085 06:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:45.085 [2024-12-06 06:37:03.709037] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:10:45.085 [2024-12-06 06:37:03.709276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59452 ] 00:10:45.344 [2024-12-06 06:37:03.918185] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59431 has claimed it. 00:10:45.344 [2024-12-06 06:37:03.918276] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:10:45.911 ERROR: process (pid: 59452) is no longer running 00:10:45.911 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59452) - No such process 00:10:45.911 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.911 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:45.911 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:45.911 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:45.911 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:45.911 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:45.911 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59431 00:10:45.911 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59431 00:10:45.911 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:46.170 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59431 00:10:46.170 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59431 ']' 00:10:46.170 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59431 00:10:46.170 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:46.170 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.170 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59431 00:10:46.429 
killing process with pid 59431 00:10:46.429 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.429 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.429 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59431' 00:10:46.429 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59431 00:10:46.429 06:37:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59431 00:10:48.960 ************************************ 00:10:48.960 END TEST locking_app_on_locked_coremask 00:10:48.960 ************************************ 00:10:48.960 00:10:48.960 real 0m4.986s 00:10:48.960 user 0m5.298s 00:10:48.960 sys 0m0.967s 00:10:48.960 06:37:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.960 06:37:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:48.960 06:37:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:48.960 06:37:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:48.960 06:37:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.960 06:37:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:48.960 ************************************ 00:10:48.960 START TEST locking_overlapped_coremask 00:10:48.960 ************************************ 00:10:48.960 06:37:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:10:48.960 06:37:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59522 00:10:48.960 06:37:07 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59522 /var/tmp/spdk.sock 00:10:48.960 06:37:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:48.960 06:37:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59522 ']' 00:10:48.960 06:37:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.960 06:37:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.960 06:37:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.960 06:37:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.960 06:37:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:48.960 [2024-12-06 06:37:07.342828] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:10:48.960 [2024-12-06 06:37:07.343011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59522 ] 00:10:48.960 [2024-12-06 06:37:07.528863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:49.217 [2024-12-06 06:37:07.669309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.217 [2024-12-06 06:37:07.669464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.217 [2024-12-06 06:37:07.669497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59540 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59540 /var/tmp/spdk2.sock 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59540 /var/tmp/spdk2.sock 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59540 /var/tmp/spdk2.sock 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59540 ']' 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:50.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.148 06:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:50.148 [2024-12-06 06:37:08.688592] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:10:50.148 [2024-12-06 06:37:08.688931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59540 ] 00:10:50.406 [2024-12-06 06:37:08.898627] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59522 has claimed it. 00:10:50.406 [2024-12-06 06:37:08.898721] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:10:50.974 ERROR: process (pid: 59540) is no longer running 00:10:50.974 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59540) - No such process 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59522 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59522 ']' 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59522 00:10:50.974 06:37:09 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59522 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59522' 00:10:50.974 killing process with pid 59522 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59522 00:10:50.974 06:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59522 00:10:53.511 00:10:53.511 real 0m4.491s 00:10:53.511 user 0m12.124s 00:10:53.511 sys 0m0.756s 00:10:53.511 06:37:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.511 06:37:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:53.511 ************************************ 00:10:53.511 END TEST locking_overlapped_coremask 00:10:53.511 ************************************ 00:10:53.511 06:37:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:53.511 06:37:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:53.511 06:37:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.511 06:37:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:53.511 ************************************ 00:10:53.511 START TEST 
locking_overlapped_coremask_via_rpc 00:10:53.511 ************************************ 00:10:53.511 06:37:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:10:53.511 06:37:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59606 00:10:53.511 06:37:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59606 /var/tmp/spdk.sock 00:10:53.511 06:37:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:53.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.511 06:37:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59606 ']' 00:10:53.511 06:37:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.511 06:37:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.511 06:37:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.511 06:37:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.511 06:37:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.511 [2024-12-06 06:37:11.887140] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:10:53.511 [2024-12-06 06:37:11.887593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59606 ] 00:10:53.511 [2024-12-06 06:37:12.080930] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:53.511 [2024-12-06 06:37:12.080999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:53.769 [2024-12-06 06:37:12.219938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.769 [2024-12-06 06:37:12.220066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.769 [2024-12-06 06:37:12.220076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.719 06:37:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.719 06:37:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:54.719 06:37:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59633 00:10:54.719 06:37:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:54.719 06:37:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59633 /var/tmp/spdk2.sock 00:10:54.719 06:37:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59633 ']' 00:10:54.719 06:37:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:54.719 06:37:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.719 06:37:13 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:54.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:54.719 06:37:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.719 06:37:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.719 [2024-12-06 06:37:13.279002] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:10:54.719 [2024-12-06 06:37:13.279201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59633 ] 00:10:54.977 [2024-12-06 06:37:13.485325] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:54.977 [2024-12-06 06:37:13.485389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:55.236 [2024-12-06 06:37:13.756561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.236 [2024-12-06 06:37:13.759660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.236 [2024-12-06 06:37:13.759702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.868 06:37:16 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.868 [2024-12-06 06:37:16.038732] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59606 has claimed it. 00:10:57.868 request: 00:10:57.868 { 00:10:57.868 "method": "framework_enable_cpumask_locks", 00:10:57.868 "req_id": 1 00:10:57.868 } 00:10:57.868 Got JSON-RPC error response 00:10:57.868 response: 00:10:57.868 { 00:10:57.868 "code": -32603, 00:10:57.868 "message": "Failed to claim CPU core: 2" 00:10:57.868 } 00:10:57.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59606 /var/tmp/spdk.sock 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59606 ']' 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59633 /var/tmp/spdk2.sock 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59633 ']' 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.868 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.126 ************************************ 00:10:58.126 END TEST locking_overlapped_coremask_via_rpc 00:10:58.126 ************************************ 00:10:58.126 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.126 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:58.126 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:58.126 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:58.127 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:58.127 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:58.127 00:10:58.127 real 0m4.881s 00:10:58.127 user 0m1.749s 00:10:58.127 sys 0m0.249s 00:10:58.127 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.127 06:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.127 06:37:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:58.127 06:37:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59606 ]] 00:10:58.127 06:37:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59606 00:10:58.127 06:37:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59606 ']' 00:10:58.127 06:37:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59606 00:10:58.127 06:37:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:58.127 06:37:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.127 06:37:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59606 00:10:58.127 killing process with pid 59606 00:10:58.127 06:37:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.127 06:37:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.127 06:37:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59606' 00:10:58.127 06:37:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59606 00:10:58.127 06:37:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59606 00:11:00.656 06:37:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59633 ]] 00:11:00.656 06:37:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59633 00:11:00.656 06:37:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59633 ']' 00:11:00.656 06:37:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59633 00:11:00.656 06:37:19 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:00.656 06:37:19 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.656 06:37:19 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59633 00:11:00.656 killing process with pid 59633 00:11:00.656 06:37:19 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:00.656 06:37:19 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:00.656 06:37:19 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59633' 00:11:00.656 06:37:19 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59633 00:11:00.656 06:37:19 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59633 00:11:03.184 06:37:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:03.184 06:37:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:03.184 06:37:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59606 ]] 00:11:03.184 06:37:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59606 00:11:03.184 06:37:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59606 ']' 00:11:03.184 Process with pid 59606 is not found 00:11:03.184 06:37:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59606 00:11:03.184 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59606) - No such process 00:11:03.184 06:37:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59606 is not found' 00:11:03.184 06:37:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59633 ]] 00:11:03.184 06:37:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59633 00:11:03.184 06:37:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59633 ']' 00:11:03.184 06:37:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59633 00:11:03.184 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59633) - No such process 00:11:03.184 Process with pid 59633 is not found 00:11:03.184 06:37:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59633 is not found' 00:11:03.184 06:37:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:03.184 00:11:03.184 real 0m51.750s 00:11:03.184 user 1m29.035s 00:11:03.184 sys 0m7.779s 00:11:03.184 06:37:21 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.184 06:37:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:03.184 
************************************ 00:11:03.184 END TEST cpu_locks 00:11:03.184 ************************************ 00:11:03.184 ************************************ 00:11:03.184 END TEST event 00:11:03.184 ************************************ 00:11:03.184 00:11:03.184 real 1m25.265s 00:11:03.184 user 2m37.614s 00:11:03.184 sys 0m11.918s 00:11:03.184 06:37:21 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.184 06:37:21 event -- common/autotest_common.sh@10 -- # set +x 00:11:03.184 06:37:21 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:03.184 06:37:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:03.184 06:37:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.184 06:37:21 -- common/autotest_common.sh@10 -- # set +x 00:11:03.184 ************************************ 00:11:03.184 START TEST thread 00:11:03.184 ************************************ 00:11:03.184 06:37:21 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:03.184 * Looking for test storage... 
00:11:03.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:03.184 06:37:21 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:03.184 06:37:21 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:11:03.184 06:37:21 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:03.184 06:37:21 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:03.184 06:37:21 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.184 06:37:21 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.184 06:37:21 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.184 06:37:21 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.184 06:37:21 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.184 06:37:21 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.184 06:37:21 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.184 06:37:21 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.184 06:37:21 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.184 06:37:21 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.184 06:37:21 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.184 06:37:21 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:03.184 06:37:21 thread -- scripts/common.sh@345 -- # : 1 00:11:03.184 06:37:21 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.184 06:37:21 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:03.184 06:37:21 thread -- scripts/common.sh@365 -- # decimal 1 00:11:03.184 06:37:21 thread -- scripts/common.sh@353 -- # local d=1 00:11:03.184 06:37:21 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.184 06:37:21 thread -- scripts/common.sh@355 -- # echo 1 00:11:03.184 06:37:21 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.184 06:37:21 thread -- scripts/common.sh@366 -- # decimal 2 00:11:03.184 06:37:21 thread -- scripts/common.sh@353 -- # local d=2 00:11:03.184 06:37:21 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.184 06:37:21 thread -- scripts/common.sh@355 -- # echo 2 00:11:03.184 06:37:21 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.184 06:37:21 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.184 06:37:21 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.184 06:37:21 thread -- scripts/common.sh@368 -- # return 0 00:11:03.184 06:37:21 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.184 06:37:21 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:03.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.184 --rc genhtml_branch_coverage=1 00:11:03.184 --rc genhtml_function_coverage=1 00:11:03.184 --rc genhtml_legend=1 00:11:03.184 --rc geninfo_all_blocks=1 00:11:03.184 --rc geninfo_unexecuted_blocks=1 00:11:03.184 00:11:03.184 ' 00:11:03.184 06:37:21 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:03.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.184 --rc genhtml_branch_coverage=1 00:11:03.184 --rc genhtml_function_coverage=1 00:11:03.184 --rc genhtml_legend=1 00:11:03.184 --rc geninfo_all_blocks=1 00:11:03.184 --rc geninfo_unexecuted_blocks=1 00:11:03.184 00:11:03.184 ' 00:11:03.184 06:37:21 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:03.184 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.184 --rc genhtml_branch_coverage=1 00:11:03.184 --rc genhtml_function_coverage=1 00:11:03.184 --rc genhtml_legend=1 00:11:03.184 --rc geninfo_all_blocks=1 00:11:03.184 --rc geninfo_unexecuted_blocks=1 00:11:03.184 00:11:03.184 ' 00:11:03.184 06:37:21 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:03.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.184 --rc genhtml_branch_coverage=1 00:11:03.184 --rc genhtml_function_coverage=1 00:11:03.184 --rc genhtml_legend=1 00:11:03.184 --rc geninfo_all_blocks=1 00:11:03.184 --rc geninfo_unexecuted_blocks=1 00:11:03.184 00:11:03.184 ' 00:11:03.184 06:37:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:03.184 06:37:21 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:03.184 06:37:21 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.184 06:37:21 thread -- common/autotest_common.sh@10 -- # set +x 00:11:03.184 ************************************ 00:11:03.184 START TEST thread_poller_perf 00:11:03.184 ************************************ 00:11:03.184 06:37:21 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:03.184 [2024-12-06 06:37:21.728565] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:11:03.184 [2024-12-06 06:37:21.728975] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59831 ] 00:11:03.442 [2024-12-06 06:37:21.921557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.442 [2024-12-06 06:37:22.085899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.442 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:11:04.818 [2024-12-06T06:37:23.465Z] ====================================== 00:11:04.818 [2024-12-06T06:37:23.465Z] busy:2216494480 (cyc) 00:11:04.818 [2024-12-06T06:37:23.465Z] total_run_count: 289000 00:11:04.818 [2024-12-06T06:37:23.465Z] tsc_hz: 2200000000 (cyc) 00:11:04.818 [2024-12-06T06:37:23.465Z] ====================================== 00:11:04.818 [2024-12-06T06:37:23.465Z] poller_cost: 7669 (cyc), 3485 (nsec) 00:11:04.818 00:11:04.818 ************************************ 00:11:04.818 END TEST thread_poller_perf 00:11:04.818 ************************************ 00:11:04.818 real 0m1.666s 00:11:04.818 user 0m1.435s 00:11:04.818 sys 0m0.118s 00:11:04.818 06:37:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.818 06:37:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:04.818 06:37:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:04.818 06:37:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:04.818 06:37:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.818 06:37:23 thread -- common/autotest_common.sh@10 -- # set +x 00:11:04.818 ************************************ 00:11:04.818 START TEST thread_poller_perf 00:11:04.818 
************************************ 00:11:04.818 06:37:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:04.819 [2024-12-06 06:37:23.452718] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:11:04.819 [2024-12-06 06:37:23.452908] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59872 ] 00:11:05.077 [2024-12-06 06:37:23.640751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.335 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:11:05.335 [2024-12-06 06:37:23.779066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.716 [2024-12-06T06:37:25.363Z] ====================================== 00:11:06.716 [2024-12-06T06:37:25.363Z] busy:2204350717 (cyc) 00:11:06.716 [2024-12-06T06:37:25.363Z] total_run_count: 3394000 00:11:06.716 [2024-12-06T06:37:25.363Z] tsc_hz: 2200000000 (cyc) 00:11:06.716 [2024-12-06T06:37:25.363Z] ====================================== 00:11:06.716 [2024-12-06T06:37:25.363Z] poller_cost: 649 (cyc), 295 (nsec) 00:11:06.716 00:11:06.716 real 0m1.626s 00:11:06.716 user 0m1.399s 00:11:06.716 sys 0m0.116s 00:11:06.716 06:37:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.716 ************************************ 00:11:06.716 END TEST thread_poller_perf 00:11:06.716 ************************************ 00:11:06.716 06:37:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:06.716 06:37:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:06.716 ************************************ 00:11:06.716 END TEST thread 00:11:06.716 ************************************ 00:11:06.716 
00:11:06.716 real 0m3.607s 00:11:06.716 user 0m2.989s 00:11:06.716 sys 0m0.390s 00:11:06.716 06:37:25 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.716 06:37:25 thread -- common/autotest_common.sh@10 -- # set +x 00:11:06.716 06:37:25 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:06.716 06:37:25 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:06.716 06:37:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:06.716 06:37:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.716 06:37:25 -- common/autotest_common.sh@10 -- # set +x 00:11:06.716 ************************************ 00:11:06.716 START TEST app_cmdline 00:11:06.716 ************************************ 00:11:06.716 06:37:25 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:06.716 * Looking for test storage... 00:11:06.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:06.716 06:37:25 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:06.716 06:37:25 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:11:06.716 06:37:25 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:06.716 06:37:25 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.716 06:37:25 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:06.716 06:37:25 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.716 06:37:25 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:06.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.716 --rc genhtml_branch_coverage=1 00:11:06.716 --rc genhtml_function_coverage=1 00:11:06.716 --rc 
genhtml_legend=1 00:11:06.716 --rc geninfo_all_blocks=1 00:11:06.716 --rc geninfo_unexecuted_blocks=1 00:11:06.716 00:11:06.716 ' 00:11:06.716 06:37:25 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:06.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.716 --rc genhtml_branch_coverage=1 00:11:06.716 --rc genhtml_function_coverage=1 00:11:06.716 --rc genhtml_legend=1 00:11:06.716 --rc geninfo_all_blocks=1 00:11:06.716 --rc geninfo_unexecuted_blocks=1 00:11:06.716 00:11:06.716 ' 00:11:06.716 06:37:25 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:06.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.716 --rc genhtml_branch_coverage=1 00:11:06.716 --rc genhtml_function_coverage=1 00:11:06.716 --rc genhtml_legend=1 00:11:06.716 --rc geninfo_all_blocks=1 00:11:06.716 --rc geninfo_unexecuted_blocks=1 00:11:06.716 00:11:06.716 ' 00:11:06.716 06:37:25 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:06.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.716 --rc genhtml_branch_coverage=1 00:11:06.716 --rc genhtml_function_coverage=1 00:11:06.716 --rc genhtml_legend=1 00:11:06.716 --rc geninfo_all_blocks=1 00:11:06.716 --rc geninfo_unexecuted_blocks=1 00:11:06.716 00:11:06.716 ' 00:11:06.716 06:37:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:06.716 06:37:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59957 00:11:06.716 06:37:25 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:06.716 06:37:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59957 00:11:06.716 06:37:25 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59957 ']' 00:11:06.716 06:37:25 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.716 06:37:25 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:11:06.716 06:37:25 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.716 06:37:25 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.716 06:37:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:06.974 [2024-12-06 06:37:25.430454] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:11:06.974 [2024-12-06 06:37:25.431589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59957 ] 00:11:06.974 [2024-12-06 06:37:25.612356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.232 [2024-12-06 06:37:25.751046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.169 06:37:26 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.169 06:37:26 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:11:08.169 06:37:26 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:08.427 { 00:11:08.427 "version": "SPDK v25.01-pre git sha1 20bebc997", 00:11:08.427 "fields": { 00:11:08.427 "major": 25, 00:11:08.427 "minor": 1, 00:11:08.427 "patch": 0, 00:11:08.427 "suffix": "-pre", 00:11:08.427 "commit": "20bebc997" 00:11:08.427 } 00:11:08.427 } 00:11:08.427 06:37:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:08.427 06:37:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:08.427 06:37:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:08.427 06:37:26 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:08.427 06:37:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:08.427 06:37:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:08.427 06:37:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:08.427 06:37:26 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.427 06:37:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:08.427 06:37:26 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.427 06:37:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:08.427 06:37:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:08.427 06:37:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:08.427 06:37:26 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:11:08.427 06:37:26 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:08.427 06:37:26 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.427 06:37:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:08.427 06:37:26 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.427 06:37:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:08.427 06:37:26 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.427 06:37:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:08.427 06:37:26 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.427 06:37:26 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:08.427 06:37:26 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:08.685 request: 00:11:08.685 { 00:11:08.685 "method": "env_dpdk_get_mem_stats", 00:11:08.685 "req_id": 1 00:11:08.685 } 00:11:08.685 Got JSON-RPC error response 00:11:08.685 response: 00:11:08.685 { 00:11:08.685 "code": -32601, 00:11:08.685 "message": "Method not found" 00:11:08.685 } 00:11:08.685 06:37:27 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:11:08.685 06:37:27 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:08.685 06:37:27 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:08.685 06:37:27 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:08.685 06:37:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59957 00:11:08.685 06:37:27 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59957 ']' 00:11:08.685 06:37:27 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59957 00:11:08.685 06:37:27 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:11:08.685 06:37:27 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.685 06:37:27 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59957 00:11:08.944 killing process with pid 59957 00:11:08.944 06:37:27 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.944 06:37:27 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.944 06:37:27 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59957' 00:11:08.944 06:37:27 app_cmdline -- common/autotest_common.sh@973 -- # kill 59957 00:11:08.944 06:37:27 app_cmdline -- common/autotest_common.sh@978 -- # wait 59957 00:11:11.600 ************************************ 00:11:11.600 END TEST app_cmdline 00:11:11.600 ************************************ 
00:11:11.600 00:11:11.600 real 0m4.635s 00:11:11.600 user 0m5.081s 00:11:11.600 sys 0m0.686s 00:11:11.600 06:37:29 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.600 06:37:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:11.600 06:37:29 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:11.600 06:37:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:11.600 06:37:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.600 06:37:29 -- common/autotest_common.sh@10 -- # set +x 00:11:11.600 ************************************ 00:11:11.600 START TEST version 00:11:11.600 ************************************ 00:11:11.600 06:37:29 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:11.600 * Looking for test storage... 00:11:11.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:11.600 06:37:29 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:11.600 06:37:29 version -- common/autotest_common.sh@1711 -- # lcov --version 00:11:11.600 06:37:29 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:11.600 06:37:29 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:11.600 06:37:29 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.600 06:37:29 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.600 06:37:29 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.600 06:37:29 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.600 06:37:29 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.600 06:37:29 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.600 06:37:29 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.600 06:37:29 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.600 06:37:29 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.600 06:37:29 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:11:11.600 06:37:29 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.600 06:37:29 version -- scripts/common.sh@344 -- # case "$op" in 00:11:11.600 06:37:29 version -- scripts/common.sh@345 -- # : 1 00:11:11.600 06:37:29 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.600 06:37:29 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:11.600 06:37:29 version -- scripts/common.sh@365 -- # decimal 1 00:11:11.600 06:37:29 version -- scripts/common.sh@353 -- # local d=1 00:11:11.600 06:37:29 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.600 06:37:29 version -- scripts/common.sh@355 -- # echo 1 00:11:11.600 06:37:29 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.600 06:37:29 version -- scripts/common.sh@366 -- # decimal 2 00:11:11.600 06:37:29 version -- scripts/common.sh@353 -- # local d=2 00:11:11.600 06:37:29 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.600 06:37:29 version -- scripts/common.sh@355 -- # echo 2 00:11:11.600 06:37:29 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.600 06:37:30 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.600 06:37:30 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.600 06:37:30 version -- scripts/common.sh@368 -- # return 0 00:11:11.600 06:37:30 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.600 06:37:30 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:11.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.600 --rc genhtml_branch_coverage=1 00:11:11.600 --rc genhtml_function_coverage=1 00:11:11.600 --rc genhtml_legend=1 00:11:11.600 --rc geninfo_all_blocks=1 00:11:11.600 --rc geninfo_unexecuted_blocks=1 00:11:11.600 00:11:11.600 ' 00:11:11.600 06:37:30 version -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:11:11.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.600 --rc genhtml_branch_coverage=1 00:11:11.600 --rc genhtml_function_coverage=1 00:11:11.600 --rc genhtml_legend=1 00:11:11.600 --rc geninfo_all_blocks=1 00:11:11.600 --rc geninfo_unexecuted_blocks=1 00:11:11.600 00:11:11.600 ' 00:11:11.600 06:37:30 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:11.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.600 --rc genhtml_branch_coverage=1 00:11:11.600 --rc genhtml_function_coverage=1 00:11:11.600 --rc genhtml_legend=1 00:11:11.600 --rc geninfo_all_blocks=1 00:11:11.600 --rc geninfo_unexecuted_blocks=1 00:11:11.600 00:11:11.600 ' 00:11:11.600 06:37:30 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:11.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.600 --rc genhtml_branch_coverage=1 00:11:11.600 --rc genhtml_function_coverage=1 00:11:11.600 --rc genhtml_legend=1 00:11:11.600 --rc geninfo_all_blocks=1 00:11:11.600 --rc geninfo_unexecuted_blocks=1 00:11:11.600 00:11:11.600 ' 00:11:11.600 06:37:30 version -- app/version.sh@17 -- # get_header_version major 00:11:11.600 06:37:30 version -- app/version.sh@14 -- # cut -f2 00:11:11.600 06:37:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:11.600 06:37:30 version -- app/version.sh@14 -- # tr -d '"' 00:11:11.600 06:37:30 version -- app/version.sh@17 -- # major=25 00:11:11.600 06:37:30 version -- app/version.sh@18 -- # get_header_version minor 00:11:11.600 06:37:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:11.600 06:37:30 version -- app/version.sh@14 -- # cut -f2 00:11:11.600 06:37:30 version -- app/version.sh@14 -- # tr -d '"' 00:11:11.600 06:37:30 version -- app/version.sh@18 -- # minor=1 00:11:11.600 06:37:30 
version -- app/version.sh@19 -- # get_header_version patch 00:11:11.600 06:37:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:11.600 06:37:30 version -- app/version.sh@14 -- # cut -f2 00:11:11.600 06:37:30 version -- app/version.sh@14 -- # tr -d '"' 00:11:11.600 06:37:30 version -- app/version.sh@19 -- # patch=0 00:11:11.600 06:37:30 version -- app/version.sh@20 -- # get_header_version suffix 00:11:11.600 06:37:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:11.600 06:37:30 version -- app/version.sh@14 -- # cut -f2 00:11:11.600 06:37:30 version -- app/version.sh@14 -- # tr -d '"' 00:11:11.600 06:37:30 version -- app/version.sh@20 -- # suffix=-pre 00:11:11.600 06:37:30 version -- app/version.sh@22 -- # version=25.1 00:11:11.600 06:37:30 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:11.600 06:37:30 version -- app/version.sh@28 -- # version=25.1rc0 00:11:11.600 06:37:30 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:11.600 06:37:30 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:11.600 06:37:30 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:11.600 06:37:30 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:11.600 00:11:11.600 real 0m0.260s 00:11:11.600 user 0m0.183s 00:11:11.600 sys 0m0.118s 00:11:11.600 ************************************ 00:11:11.600 END TEST version 00:11:11.600 ************************************ 00:11:11.600 06:37:30 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.600 06:37:30 version -- common/autotest_common.sh@10 -- # set +x 00:11:11.601 
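Editor's note: the `version` test above extracts each field of the SPDK version by piping `include/spdk/version.h` through `grep -E '^#define SPDK_VERSION_…' | cut -f2 | tr -d '"'`. The sketch below is a minimal Python re-implementation of that pipeline for illustration only; `SAMPLE_VERSION_H` and `get_header_version` are hypothetical names, and the sample header contents are reconstructed from the values the log reports (major=25, minor=1, patch=0, suffix=-pre), not copied from SPDK's actual `version.h`.

```python
import re

# Sample header reconstructed from the field values shown in the log above
# (assumption: tab-separated #define lines, as the `cut -f2` step implies).
SAMPLE_VERSION_H = """\
#define SPDK_VERSION_MAJOR\t25
#define SPDK_VERSION_MINOR\t1
#define SPDK_VERSION_PATCH\t0
#define SPDK_VERSION_SUFFIX\t"-pre"
"""

def get_header_version(header_text: str, field: str) -> str:
    # grep -E '^#define SPDK_VERSION_<FIELD>[[:space:]]+' equivalent
    pattern = rf'^#define SPDK_VERSION_{field}\s+(.+)$'
    match = re.search(pattern, header_text, re.MULTILINE)
    # cut -f2 / tr -d '"' equivalent: take the value, drop surrounding quotes
    return match.group(1).strip().strip('"')

print(get_header_version(SAMPLE_VERSION_H, "MAJOR"))   # 25
print(get_header_version(SAMPLE_VERSION_H, "SUFFIX"))  # -pre
```

With these fields the script then assembles `version=25.1` and, because patch is 0 and a suffix is present, reports `25.1rc0`, matching the `py_version=25.1rc0` check in the log.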
06:37:30 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:11.601 06:37:30 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:11:11.601 06:37:30 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:11:11.601 06:37:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:11.601 06:37:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.601 06:37:30 -- common/autotest_common.sh@10 -- # set +x 00:11:11.601 ************************************ 00:11:11.601 START TEST bdev_raid 00:11:11.601 ************************************ 00:11:11.601 06:37:30 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:11:11.601 * Looking for test storage... 00:11:11.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:11.601 06:37:30 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:11.601 06:37:30 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:11:11.601 06:37:30 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:11.905 06:37:30 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@345 -- # : 1 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.905 06:37:30 bdev_raid -- scripts/common.sh@368 -- # return 0 00:11:11.905 06:37:30 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.905 06:37:30 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:11.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.905 --rc genhtml_branch_coverage=1 00:11:11.905 --rc genhtml_function_coverage=1 00:11:11.905 --rc genhtml_legend=1 00:11:11.905 --rc geninfo_all_blocks=1 00:11:11.905 --rc geninfo_unexecuted_blocks=1 00:11:11.905 00:11:11.905 ' 00:11:11.905 06:37:30 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:11.905 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:11.905 --rc genhtml_branch_coverage=1 00:11:11.905 --rc genhtml_function_coverage=1 00:11:11.905 --rc genhtml_legend=1 00:11:11.905 --rc geninfo_all_blocks=1 00:11:11.905 --rc geninfo_unexecuted_blocks=1 00:11:11.905 00:11:11.905 ' 00:11:11.905 06:37:30 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:11.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.905 --rc genhtml_branch_coverage=1 00:11:11.905 --rc genhtml_function_coverage=1 00:11:11.905 --rc genhtml_legend=1 00:11:11.905 --rc geninfo_all_blocks=1 00:11:11.905 --rc geninfo_unexecuted_blocks=1 00:11:11.905 00:11:11.905 ' 00:11:11.905 06:37:30 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:11.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.905 --rc genhtml_branch_coverage=1 00:11:11.905 --rc genhtml_function_coverage=1 00:11:11.905 --rc genhtml_legend=1 00:11:11.905 --rc geninfo_all_blocks=1 00:11:11.905 --rc geninfo_unexecuted_blocks=1 00:11:11.905 00:11:11.905 ' 00:11:11.905 06:37:30 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:11.905 06:37:30 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:11:11.905 06:37:30 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:11:11.905 06:37:30 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:11:11.905 06:37:30 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:11:11.905 06:37:30 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:11:11.905 06:37:30 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:11:11.905 06:37:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:11.905 06:37:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.905 06:37:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.905 ************************************ 
00:11:11.905 START TEST raid1_resize_data_offset_test 00:11:11.905 ************************************ 00:11:11.905 06:37:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:11:11.905 06:37:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60150 00:11:11.905 06:37:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60150' 00:11:11.905 06:37:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:11.905 Process raid pid: 60150 00:11:11.905 06:37:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60150 00:11:11.905 06:37:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60150 ']' 00:11:11.905 06:37:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.905 06:37:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.905 06:37:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.905 06:37:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.905 06:37:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.905 [2024-12-06 06:37:30.446317] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:11:11.905 [2024-12-06 06:37:30.446789] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.165 [2024-12-06 06:37:30.626166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.165 [2024-12-06 06:37:30.765213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.425 [2024-12-06 06:37:30.991848] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.425 [2024-12-06 06:37:30.991910] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.990 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.990 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:11:12.990 06:37:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:11:12.990 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.990 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.990 malloc0 00:11:12.990 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.990 06:37:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:11:12.990 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.990 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.249 malloc1 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.249 06:37:31 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.249 null0 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.249 [2024-12-06 06:37:31.713178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:11:13.249 [2024-12-06 06:37:31.715776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:13.249 [2024-12-06 06:37:31.716013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:11:13.249 [2024-12-06 06:37:31.716235] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:13.249 [2024-12-06 06:37:31.716259] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:11:13.249 [2024-12-06 06:37:31.716609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:11:13.249 [2024-12-06 06:37:31.716834] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:13.249 [2024-12-06 06:37:31.716858] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:13.249 [2024-12-06 06:37:31.717090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
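The log above shows the raid1 array assembled from two 64 MiB malloc bdevs plus a null bdev, reported as `blockcnt 129024, blocklen 512`. A minimal sketch of the size arithmetic those numbers imply (the 2048-block data offset is the value the test later reads back over jq; the 512-byte block size is taken from the log):

```python
BLOCKLEN = 512                              # block size reported in the log
base_blocks = 64 * 1024 * 1024 // BLOCKLEN  # each 64 MiB base bdev -> 131072 blocks
data_offset = 2048                          # blocks reserved ahead of the data region (per the jq check)

# raid1 capacity is the (smallest) base bdev minus the per-bdev data offset
raid1_blocks = base_blocks - data_offset
print(raid1_blocks)  # 129024, matching "blockcnt 129024, blocklen 512"
```

After the rebuild onto malloc2 later in the test, the same jq check expects the offset to have grown to 2070 blocks.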
00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.249 [2024-12-06 06:37:31.777292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.249 06:37:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.834 malloc2 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.834 [2024-12-06 06:37:32.340224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:13.834 [2024-12-06 06:37:32.357877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.834 [2024-12-06 06:37:32.361021] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60150 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60150 ']' 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60150 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60150 00:11:13.834 killing process with pid 60150 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60150' 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60150 00:11:13.834 06:37:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60150 00:11:13.834 [2024-12-06 06:37:32.451499] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:13.834 [2024-12-06 06:37:32.452872] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:11:13.834 [2024-12-06 06:37:32.452982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.834 [2024-12-06 06:37:32.453010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:11:14.092 [2024-12-06 06:37:32.486384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.092 [2024-12-06 06:37:32.487088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.092 [2024-12-06 06:37:32.487138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:15.994 [2024-12-06 06:37:34.188803] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:16.930 ************************************ 00:11:16.930 END TEST raid1_resize_data_offset_test 00:11:16.930 ************************************ 00:11:16.930 06:37:35 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:11:16.930 00:11:16.930 real 0m4.949s 00:11:16.930 user 0m4.960s 00:11:16.930 sys 0m0.671s 00:11:16.930 06:37:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.930 06:37:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.930 06:37:35 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:11:16.930 06:37:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:16.930 06:37:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.930 06:37:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.930 ************************************ 00:11:16.930 START TEST raid0_resize_superblock_test 00:11:16.930 ************************************ 00:11:16.930 06:37:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:11:16.930 06:37:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:11:16.930 Process raid pid: 60239 00:11:16.930 06:37:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60239 00:11:16.930 06:37:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60239' 00:11:16.930 06:37:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60239 00:11:16.930 06:37:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:16.930 06:37:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60239 ']' 00:11:16.930 06:37:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.930 06:37:35 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.931 06:37:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.931 06:37:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.931 06:37:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.931 [2024-12-06 06:37:35.448250] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:11:16.931 [2024-12-06 06:37:35.449121] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.189 [2024-12-06 06:37:35.632144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.189 [2024-12-06 06:37:35.795305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.448 [2024-12-06 06:37:36.012635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.448 [2024-12-06 06:37:36.012691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.013 06:37:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.013 06:37:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:18.013 06:37:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:11:18.013 06:37:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.014 06:37:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:18.581 malloc0 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.581 [2024-12-06 06:37:37.025917] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:18.581 [2024-12-06 06:37:37.026004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.581 [2024-12-06 06:37:37.026054] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:18.581 [2024-12-06 06:37:37.026077] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.581 [2024-12-06 06:37:37.029170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.581 [2024-12-06 06:37:37.029256] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:18.581 pt0 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.581 60394162-7bf9-4d96-810d-27a8a6f4a7e7 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.581 0b8484dc-f2e6-4b52-97e0-d91c4090872e 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.581 87130ac8-dc9a-49ee-8733-f81d4ae80e31 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.581 [2024-12-06 06:37:37.178265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0b8484dc-f2e6-4b52-97e0-d91c4090872e is claimed 00:11:18.581 [2024-12-06 06:37:37.178631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 87130ac8-dc9a-49ee-8733-f81d4ae80e31 is claimed 00:11:18.581 [2024-12-06 06:37:37.179039] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:18.581 [2024-12-06 06:37:37.179192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:11:18.581 [2024-12-06 06:37:37.179734] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:18.581 [2024-12-06 06:37:37.180164] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:18.581 [2024-12-06 06:37:37.180306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:18.581 [2024-12-06 06:37:37.180689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.581 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:11:18.840 06:37:37 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.840 [2024-12-06 06:37:37.310609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.840 [2024-12-06 06:37:37.358679] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:18.840 [2024-12-06 06:37:37.358737] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0b8484dc-f2e6-4b52-97e0-d91c4090872e' was resized: old size 131072, new size 204800 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
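The `(( 245760 == 245760 ))` check above follows from the lvol geometry: two 64 MiB lvols, each losing a superblock region at its front, striped into one raid0. A sketch of that arithmetic, assuming the 8192-block (4 MiB) per-base data offset implied by the reported block counts (131072 - 245760/2):

```python
BLOCKLEN = 512
lvol_blocks = 64 * 1024 * 1024 // BLOCKLEN  # 131072 blocks per lvol
sb_offset = 8192                            # assumed per-base data offset, back-derived from the log

# raid0 capacity: striped sum of the usable region of each base bdev
raid0_blocks = 2 * (lvol_blocks - sb_offset)
print(raid0_blocks)                         # 245760, the value the test compares against

# after "bdev_lvol_resize ... 100" each lvol is 204800 blocks
resized_blocks = 2 * (100 * 1024 * 1024 // BLOCKLEN - sb_offset)
print(resized_blocks)                       # 393216, matching the post-resize notice
```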
00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:11:18.840 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.841 [2024-12-06 06:37:37.366613] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:18.841 [2024-12-06 06:37:37.366648] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '87130ac8-dc9a-49ee-8733-f81d4ae80e31' was resized: old size 131072, new size 204800 00:11:18.841 [2024-12-06 06:37:37.366697] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.841 06:37:37 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.841 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.099 [2024-12-06 06:37:37.486695] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.099 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.099 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:19.099 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:19.099 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:11:19.099 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:11:19.099 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.099 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.099 [2024-12-06 06:37:37.538643] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:11:19.099 [2024-12-06 06:37:37.539059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:11:19.099 [2024-12-06 06:37:37.539313] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.099 [2024-12-06 06:37:37.539387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:11:19.099 [2024-12-06 06:37:37.539701] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.099 [2024-12-06 06:37:37.539806] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.099 [2024-12-06 06:37:37.539847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:19.099 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.099 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:11:19.099 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.099 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.099 [2024-12-06 06:37:37.550353] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:19.099 [2024-12-06 06:37:37.550668] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.099 [2024-12-06 06:37:37.550868] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:11:19.099 [2024-12-06 06:37:37.550921] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.099 [2024-12-06 06:37:37.555437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.099 [2024-12-06 06:37:37.555599] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
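Re-registering pt0 above triggers bdev examine, which re-assembles the array from the on-disk superblocks; when superblock sequence numbers disagree (the log later reports generations 2 and 3), the lower generation is treated as stale. A rough sketch of that selection rule (hypothetical helper, not SPDK's actual implementation):

```python
# Hypothetical sketch: pick which superblock generation wins during examine.
def newest_superblock(sbs):
    """Given (bdev_name, seq_number) pairs, return the highest generation."""
    return max(sbs, key=lambda sb: sb[1])

# seq numbers 3 and 2, as in the examine messages in this log
sbs = [("0b8484dc", 3), ("87130ac8", 2)]
print(newest_superblock(sbs))  # ('0b8484dc', 3): the seq-2 superblock is stale
```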
00:11:19.099 pt0 00:11:19.099 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.099 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:19.099 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.099 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.099 [2024-12-06 06:37:37.560178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0b8484dc-f2e6-4b52-97e0-d91c4090872e 00:11:19.099 [2024-12-06 06:37:37.560593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0b8484dc-f2e6-4b52-97e0-d91c4090872e is claimed 00:11:19.099 [2024-12-06 06:37:37.560900] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 87130ac8-dc9a-49ee-8733-f81d4ae80e31 00:11:19.099 [2024-12-06 06:37:37.561000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 87130ac8-dc9a-49ee-8733-f81d4ae80e31 is claimed 00:11:19.099 [2024-12-06 06:37:37.561633] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 87130ac8-dc9a-49ee-8733-f81d4ae80e31 (2) smaller than existing raid bdev Raid (3) 00:11:19.099 [2024-12-06 06:37:37.561697] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 0b8484dc-f2e6-4b52-97e0-d91c4090872e: File exists 00:11:19.099 [2024-12-06 06:37:37.561828] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:19.099 [2024-12-06 06:37:37.561881] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:11:19.099 [2024-12-06 06:37:37.562578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:19.099 [2024-12-06 06:37:37.563055] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:19.099 [2024-12-06 
06:37:37.563090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:11:19.099 [2024-12-06 06:37:37.563518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.100 [2024-12-06 06:37:37.572834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60239 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60239 ']' 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60239 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60239 00:11:19.100 killing process with pid 60239 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60239' 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60239 00:11:19.100 06:37:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60239 00:11:19.100 [2024-12-06 06:37:37.654417] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.100 [2024-12-06 06:37:37.654622] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.100 [2024-12-06 06:37:37.654749] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.100 [2024-12-06 06:37:37.654790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:11:20.473 [2024-12-06 06:37:39.013310] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:21.847 06:37:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:11:21.847 00:11:21.847 real 0m4.749s 00:11:21.847 user 0m5.103s 00:11:21.847 sys 0m0.623s 00:11:21.847 06:37:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.847 06:37:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.847 
************************************ 00:11:21.847 END TEST raid0_resize_superblock_test 00:11:21.847 ************************************ 00:11:21.847 06:37:40 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:11:21.847 06:37:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:21.847 06:37:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.847 06:37:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:21.847 ************************************ 00:11:21.847 START TEST raid1_resize_superblock_test 00:11:21.847 ************************************ 00:11:21.847 06:37:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:11:21.848 06:37:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:11:21.848 06:37:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60332 00:11:21.848 06:37:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60332' 00:11:21.848 Process raid pid: 60332 00:11:21.848 06:37:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:21.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:21.848 06:37:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60332 00:11:21.848 06:37:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60332 ']' 00:11:21.848 06:37:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.848 06:37:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.848 06:37:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.848 06:37:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.848 06:37:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.848 [2024-12-06 06:37:40.250119] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:11:21.848 [2024-12-06 06:37:40.250293] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.848 [2024-12-06 06:37:40.429101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.106 [2024-12-06 06:37:40.563923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.364 [2024-12-06 06:37:40.773687] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.365 [2024-12-06 06:37:40.773978] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.623 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.623 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:22.623 06:37:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:11:22.623 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.623 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.188 malloc0 00:11:23.188 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.188 06:37:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:11:23.188 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.188 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.188 [2024-12-06 06:37:41.780811] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:23.189 [2024-12-06 06:37:41.780911] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.189 [2024-12-06 06:37:41.780947] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:23.189 [2024-12-06 06:37:41.780971] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.189 [2024-12-06 06:37:41.783976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.189 [2024-12-06 06:37:41.784029] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:23.189 pt0 00:11:23.189 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.189 06:37:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:11:23.189 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.189 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.447 56a8ca48-651f-4168-8b56-e24ce998edf5 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.447 0774f5b4-33ad-40e2-b0fb-5fd358b7ca6c 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.447 06:37:41 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.447 c52f6307-92d4-41aa-92ac-da1186dc7624 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.447 [2024-12-06 06:37:41.934362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0774f5b4-33ad-40e2-b0fb-5fd358b7ca6c is claimed 00:11:23.447 [2024-12-06 06:37:41.934566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c52f6307-92d4-41aa-92ac-da1186dc7624 is claimed 00:11:23.447 [2024-12-06 06:37:41.934804] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:23.447 [2024-12-06 06:37:41.934830] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:11:23.447 [2024-12-06 06:37:41.935202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:23.447 [2024-12-06 06:37:41.935491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:23.447 [2024-12-06 06:37:41.935509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:23.447 [2024-12-06 06:37:41.935744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.447 06:37:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.447 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.447 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:11:23.447 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:23.447 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:23.447 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:23.447 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:11:23.447 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.447 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.447 [2024-12-06 
06:37:42.054672] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.447 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.706 [2024-12-06 06:37:42.102669] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:23.706 [2024-12-06 06:37:42.102708] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0774f5b4-33ad-40e2-b0fb-5fd358b7ca6c' was resized: old size 131072, new size 204800 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.706 [2024-12-06 06:37:42.110577] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:23.706 [2024-12-06 06:37:42.110610] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c52f6307-92d4-41aa-92ac-da1186dc7624' was resized: old size 131072, new size 204800 00:11:23.706 
[2024-12-06 06:37:42.110649] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:23.706 06:37:42 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.706 [2024-12-06 06:37:42.218676] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:23.706 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.707 [2024-12-06 06:37:42.266429] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:11:23.707 [2024-12-06 06:37:42.266553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:11:23.707 [2024-12-06 06:37:42.266594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:11:23.707 [2024-12-06 06:37:42.266806] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.707 [2024-12-06 06:37:42.267072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.707 [2024-12-06 06:37:42.267176] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.707 
[2024-12-06 06:37:42.267206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.707 [2024-12-06 06:37:42.274291] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:23.707 [2024-12-06 06:37:42.274466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.707 [2024-12-06 06:37:42.274506] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:11:23.707 [2024-12-06 06:37:42.274543] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.707 [2024-12-06 06:37:42.277377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.707 [2024-12-06 06:37:42.277427] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:23.707 pt0 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:23.707 [2024-12-06 06:37:42.279845] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0774f5b4-33ad-40e2-b0fb-5fd358b7ca6c 00:11:23.707 [2024-12-06 06:37:42.279927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0774f5b4-33ad-40e2-b0fb-5fd358b7ca6c is claimed 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:23.707 [2024-12-06 06:37:42.280066] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c52f6307-92d4-41aa-92ac-da1186dc7624 00:11:23.707 [2024-12-06 06:37:42.280099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c52f6307-92d4-41aa-92ac-da1186dc7624 is claimed 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.707 [2024-12-06 06:37:42.280249] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev c52f6307-92d4-41aa-92ac-da1186dc7624 (2) smaller than existing raid bdev Raid (3) 00:11:23.707 [2024-12-06 06:37:42.280281] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 0774f5b4-33ad-40e2-b0fb-5fd358b7ca6c: File exists 00:11:23.707 [2024-12-06 06:37:42.280337] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:23.707 [2024-12-06 06:37:42.280356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:23.707 [2024-12-06 06:37:42.280685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:23.707 [2024-12-06 06:37:42.281028] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:23.707 [2024-12-06 06:37:42.281052] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:11:23.707 [2024-12-06 06:37:42.281254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.707 [2024-12-06 06:37:42.294669] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60332 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60332 ']' 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60332 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.707 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60332 00:11:23.965 killing process with pid 60332 00:11:23.965 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.965 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.965 06:37:42 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60332' 00:11:23.965 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60332 00:11:23.965 [2024-12-06 06:37:42.368956] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.965 06:37:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60332 00:11:23.966 [2024-12-06 06:37:42.369060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.966 [2024-12-06 06:37:42.369134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.966 [2024-12-06 06:37:42.369149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:11:25.342 [2024-12-06 06:37:43.663230] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:26.281 06:37:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:11:26.281 00:11:26.281 real 0m4.587s 00:11:26.281 user 0m4.885s 00:11:26.281 sys 0m0.638s 00:11:26.281 06:37:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.281 06:37:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.281 ************************************ 00:11:26.281 END TEST raid1_resize_superblock_test 00:11:26.281 ************************************ 00:11:26.281 06:37:44 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:11:26.281 06:37:44 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:11:26.281 06:37:44 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:11:26.281 06:37:44 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:11:26.281 06:37:44 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:11:26.281 06:37:44 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:11:26.281 
06:37:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:26.281 06:37:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.281 06:37:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:26.281 ************************************ 00:11:26.281 START TEST raid_function_test_raid0 00:11:26.281 ************************************ 00:11:26.281 06:37:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:11:26.281 06:37:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:11:26.281 06:37:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:11:26.281 06:37:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:11:26.281 06:37:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60440 00:11:26.281 Process raid pid: 60440 00:11:26.281 06:37:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60440' 00:11:26.281 06:37:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:26.281 06:37:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60440 00:11:26.281 06:37:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60440 ']' 00:11:26.281 06:37:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.281 06:37:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.281 06:37:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:26.281 06:37:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.281 06:37:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:26.281 [2024-12-06 06:37:44.905811] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:11:26.281 [2024-12-06 06:37:44.905980] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.539 [2024-12-06 06:37:45.076341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.798 [2024-12-06 06:37:45.209410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.798 [2024-12-06 06:37:45.417686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.798 [2024-12-06 06:37:45.417744] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.364 06:37:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.364 06:37:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:11:27.364 06:37:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:11:27.364 06:37:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.364 06:37:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:27.364 Base_1 00:11:27.364 06:37:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.364 06:37:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:11:27.364 06:37:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.364 
06:37:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:27.364 Base_2 00:11:27.364 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.364 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:11:27.364 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.364 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:27.623 [2024-12-06 06:37:46.010817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:27.623 [2024-12-06 06:37:46.013228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:27.623 [2024-12-06 06:37:46.013328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:27.623 [2024-12-06 06:37:46.013349] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:27.623 [2024-12-06 06:37:46.013725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:27.623 [2024-12-06 06:37:46.013924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:27.623 [2024-12-06 06:37:46.013940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:11:27.623 [2024-12-06 06:37:46.014136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 
-- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:27.623 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:11:27.882 [2024-12-06 06:37:46.322991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:27.882 /dev/nbd0 00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:27.882 1+0 records in
00:11:27.882 1+0 records out
00:11:27.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350659 s, 11.7 MB/s
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:11:27.882 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:11:28.140 {
00:11:28.140 "nbd_device": "/dev/nbd0",
00:11:28.140 "bdev_name": "raid"
00:11:28.140 }
00:11:28.140 ]'
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[
00:11:28.140 {
00:11:28.140 "nbd_device": "/dev/nbd0",
00:11:28.140 "bdev_name": "raid"
00:11:28.140 }
00:11:28.140 ]'
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:11:28.140 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:11:28.411 4096+0 records in
00:11:28.411 4096+0 records out
00:11:28.411 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0303777 s, 69.0 MB/s
00:11:28.411 06:37:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:11:28.670 4096+0 records in
00:11:28.670 4096+0 records out
00:11:28.670 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.303844 s, 6.9 MB/s
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:11:28.670 128+0 records in
00:11:28.670 128+0 records out
00:11:28.670 65536 bytes (66 kB, 64 KiB) copied, 0.000873325 s, 75.0 MB/s
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:11:28.670 2035+0 records in
00:11:28.670 2035+0 records out
00:11:28.670 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0138056 s, 75.5 MB/s
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:11:28.670 456+0 records in
00:11:28.670 456+0 records out
00:11:28.670 233472 bytes (233 kB, 228 KiB) copied, 0.0026958 s, 86.6 MB/s
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:11:28.670 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:11:28.671 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:11:28.671 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:11:28.671 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:11:28.671 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0
00:11:28.671 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:11:28.671 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:11:28.671 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:11:28.671 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:28.671 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i
00:11:28.671 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:28.671 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:11:28.929 [2024-12-06 06:37:47.528301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:28.929 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:28.929 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:28.929 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:28.929 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:28.929 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:28.929 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:28.929 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break
00:11:28.929 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0
00:11:28.929 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:11:28.929 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:11:28.929 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:11:29.289 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:11:29.290 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:11:29.290 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:29.557 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:11:29.557 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo ''
00:11:29.557 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:29.557 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true
00:11:29.557 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0
00:11:29.557 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0
00:11:29.557 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0
00:11:29.557 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:11:29.557 06:37:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60440
00:11:29.557 06:37:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60440 ']'
00:11:29.557 06:37:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60440
00:11:29.557 06:37:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname
00:11:29.557 06:37:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:29.557 06:37:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60440
00:11:29.557 killing process with pid 60440
06:37:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:29.557 06:37:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:29.557 06:37:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60440'
00:11:29.557 06:37:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60440
00:11:29.557 [2024-12-06 06:37:47.948900] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
06:37:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60440
00:11:29.557 [2024-12-06 06:37:47.949022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:29.557 [2024-12-06 06:37:47.949093] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:29.557 [2024-12-06 06:37:47.949116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:11:29.557 [2024-12-06 06:37:48.140328] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:30.933 ************************************
00:11:30.933 END TEST raid_function_test_raid0
00:11:30.933 ************************************
00:11:30.933 06:37:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0
00:11:30.933
00:11:30.933 real 0m4.417s
00:11:30.933 user 0m5.466s
00:11:30.933 sys 0m1.021s
00:11:30.933 06:37:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:30.933 06:37:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:11:30.934 06:37:49 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat
00:11:30.934 06:37:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:30.934 06:37:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:30.934 06:37:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:30.934 ************************************
00:11:30.934 START TEST raid_function_test_concat
00:11:30.934 ************************************
00:11:30.934 06:37:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat
00:11:30.934 06:37:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat
00:11:30.934 06:37:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:11:30.934 06:37:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:11:30.934 Process raid pid: 60569
00:11:30.934 06:37:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60569
00:11:30.934 06:37:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60569'
00:11:30.934 06:37:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60569
00:11:30.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
06:37:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60569 ']'
00:11:30.934 06:37:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:11:30.934 06:37:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:30.934 06:37:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:30.934 06:37:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:30.934 06:37:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:30.934 06:37:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:11:30.934 [2024-12-06 06:37:49.384607] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization...
[2024-12-06 06:37:49.384782] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-12-06 06:37:49.559065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-06 06:37:49.696014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:31.449 [2024-12-06 06:37:49.913364] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:31.449 [2024-12-06 06:37:49.913426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:32.015 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:32.015 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0
00:11:32.015 06:37:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:11:32.015 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.015 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:11:32.015 Base_1
00:11:32.015 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.015 06:37:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:11:32.015 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:11:32.016 Base_2
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:11:32.016 [2024-12-06 06:37:50.542566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:11:32.016 [2024-12-06 06:37:50.545251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:11:32.016 [2024-12-06 06:37:50.545357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:11:32.016 [2024-12-06 06:37:50.545377] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:11:32.016 [2024-12-06 06:37:50.545766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:32.016 [2024-12-06 06:37:50.545966] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:11:32.016 [2024-12-06 06:37:50.545981] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780
00:11:32.016 [2024-12-06 06:37:50.546186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:11:32.016 06:37:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:11:32.274 [2024-12-06 06:37:50.898726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:11:32.532 /dev/nbd0
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:32.533 1+0 records in
00:11:32.533 1+0 records out
00:11:32.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367384 s, 11.1 MB/s
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:11:32.533 06:37:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:11:32.791 06:37:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:11:32.791 {
00:11:32.791 "nbd_device": "/dev/nbd0",
00:11:32.791 "bdev_name": "raid"
00:11:32.791 }
00:11:32.791 ]'
00:11:32.791 06:37:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[
00:11:32.791 {
00:11:32.791 "nbd_device": "/dev/nbd0",
00:11:32.791 "bdev_name": "raid"
00:11:32.791 }
00:11:32.791 ]'
00:11:32.791 06:37:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:32.791 06:37:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:11:32.792 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:11:33.051 4096+0 records in
00:11:33.051 4096+0 records out
00:11:33.051 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0334933 s, 62.6 MB/s
00:11:33.051 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:11:33.376 4096+0 records in
00:11:33.376 4096+0 records out
00:11:33.376 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.331827 s, 6.3 MB/s
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:11:33.376 128+0 records in
00:11:33.376 128+0 records out
00:11:33.376 65536 bytes (66 kB, 64 KiB) copied, 0.00151109 s, 43.4 MB/s
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
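Each pass of the discard loop traced above converts a block offset/count pair into byte values for `blkdiscard` using the 512-byte logical block size reported by `lsblk`, after `dd ... conv=notrunc` zeroes the same window in the local reference file. The three windows from the trace are plain arithmetic (paths and devices aside, nothing here touches the test environment):

```shell
#!/bin/sh
# Byte offset/length for each unmap window: block offset/count times the
# 512-byte logical block size, matching unmap_off/unmap_len in the trace.
blksize=512
for pair in "0 128" "1028 2035" "321 456"; do
  set -- $pair
  echo "off=$(( $1 * blksize )) len=$(( $2 * blksize ))"
done
# prints:
#   off=0 len=65536
#   off=526336 len=1041920
#   off=164352 len=233472
```

These byte values are exactly the arguments the test hands to `blkdiscard -o <off> -l <len> /dev/nbd0` before flushing with `blockdev --flushbufs` and re-running `cmp` against the reference file.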
00:11:33.376 2035+0 records in
00:11:33.376 2035+0 records out
00:11:33.376 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00839386 s, 124 MB/s
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:11:33.376 456+0 records in
00:11:33.376 456+0 records out
00:11:33.376 233472 bytes (233 kB, 228 KiB) copied, 0.00328532 s, 71.1 MB/s
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:33.376 06:37:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:11:33.636 [2024-12-06 06:37:52.206546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:33.636 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:33.636 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:33.636 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:33.636 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:33.636 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:33.636 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:33.636 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:11:33.636 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:11:33.636 06:37:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:11:33.636 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:11:33.636 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:11:33.895 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:11:33.895 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:11:33.895 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:33.895 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:11:33.895 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:11:33.895 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:33.895 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:11:33.895 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:11:33.895 06:37:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:11:33.895 06:37:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:11:33.895 06:37:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:11:33.895 06:37:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60569
00:11:33.895 06:37:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60569 ']'
00:11:33.895 06:37:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60569
00:11:34.154 06:37:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname
00:11:34.154 06:37:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:34.154 06:37:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60569
00:11:34.154 killing process with pid 60569
06:37:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:34.154 06:37:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:34.154 06:37:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60569'
00:11:34.154 06:37:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60569
00:11:34.154 [2024-12-06 06:37:52.572513] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
06:37:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60569
00:11:34.154 [2024-12-06 06:37:52.572673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:34.154 [2024-12-06 06:37:52.572753] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:34.154 [2024-12-06 06:37:52.572772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:11:34.154 [2024-12-06 06:37:52.762343] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:35.528 06:37:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:11:35.528
00:11:35.528 real 0m4.547s
00:11:35.528 user 0m5.632s
00:11:35.528 sys 0m1.076s
00:11:35.528 06:37:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:35.528 ************************************
00:11:35.528 END TEST raid_function_test_concat ************************************
00:11:35.528 06:37:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:11:35.528 06:37:53 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:11:35.528 06:37:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:35.528 06:37:53 bdev_raid --
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.528 06:37:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.528 ************************************ 00:11:35.528 START TEST raid0_resize_test 00:11:35.528 ************************************ 00:11:35.528 06:37:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:11:35.528 06:37:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:11:35.528 06:37:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:11:35.529 06:37:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:11:35.529 06:37:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:11:35.529 06:37:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:11:35.529 06:37:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:11:35.529 06:37:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:11:35.529 06:37:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:11:35.529 Process raid pid: 60704 00:11:35.529 06:37:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60704 00:11:35.529 06:37:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60704' 00:11:35.529 06:37:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:35.529 06:37:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60704 00:11:35.529 06:37:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60704 ']' 00:11:35.529 06:37:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.529 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:11:35.529 06:37:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.529 06:37:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.529 06:37:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.529 06:37:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.529 [2024-12-06 06:37:53.989337] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:11:35.529 [2024-12-06 06:37:53.989729] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.529 [2024-12-06 06:37:54.164996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.786 [2024-12-06 06:37:54.302392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.045 [2024-12-06 06:37:54.520278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.045 [2024-12-06 06:37:54.520340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.612 Base_1 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.612 Base_2 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.612 [2024-12-06 06:37:55.079624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:36.612 [2024-12-06 06:37:55.082143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:36.612 [2024-12-06 06:37:55.082378] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:36.612 [2024-12-06 06:37:55.082411] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:36.612 [2024-12-06 06:37:55.082734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:11:36.612 [2024-12-06 06:37:55.082889] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:36.612 [2024-12-06 06:37:55.082904] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:36.612 [2024-12-06 06:37:55.083079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.612 [2024-12-06 06:37:55.087588] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:36.612 [2024-12-06 06:37:55.087621] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:11:36.612 true 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:11:36.612 [2024-12-06 06:37:55.099874] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.612 [2024-12-06 06:37:55.155723] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:36.612 [2024-12-06 06:37:55.155759] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:11:36.612 [2024-12-06 06:37:55.155804] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:11:36.612 true 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.612 [2024-12-06 06:37:55.167873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60704 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@954 -- # '[' -z 60704 ']' 00:11:36.612 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60704 00:11:36.613 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:11:36.613 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.613 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60704 00:11:36.613 killing process with pid 60704 00:11:36.613 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:36.613 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:36.613 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60704' 00:11:36.613 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60704 00:11:36.613 [2024-12-06 06:37:55.249576] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:36.613 [2024-12-06 06:37:55.249702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.613 06:37:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60704 00:11:36.613 [2024-12-06 06:37:55.249768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.613 [2024-12-06 06:37:55.249783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:36.871 [2024-12-06 06:37:55.265834] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:37.870 06:37:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:11:37.870 00:11:37.870 real 0m2.463s 00:11:37.870 user 0m2.752s 00:11:37.870 sys 0m0.417s 00:11:37.870 ************************************ 00:11:37.870 END TEST raid0_resize_test 00:11:37.870 
************************************ 00:11:37.870 06:37:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.870 06:37:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.870 06:37:56 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:11:37.870 06:37:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:37.870 06:37:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.870 06:37:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:37.870 ************************************ 00:11:37.870 START TEST raid1_resize_test 00:11:37.870 ************************************ 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:11:37.870 Process raid pid: 60765 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60765 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60765' 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60765 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60765 ']' 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.870 06:37:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.128 [2024-12-06 06:37:56.526664] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:11:38.128 [2024-12-06 06:37:56.527141] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.128 [2024-12-06 06:37:56.722144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.387 [2024-12-06 06:37:56.899651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.645 [2024-12-06 06:37:57.126566] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.645 [2024-12-06 06:37:57.126641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.211 Base_1 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.211 Base_2 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.211 [2024-12-06 06:37:57.575621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:39.211 [2024-12-06 06:37:57.578218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:39.211 [2024-12-06 06:37:57.578299] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:39.211 [2024-12-06 06:37:57.578320] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:39.211 [2024-12-06 06:37:57.578685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:11:39.211 [2024-12-06 06:37:57.578857] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:39.211 [2024-12-06 06:37:57.578873] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:39.211 [2024-12-06 06:37:57.579074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.211 [2024-12-06 06:37:57.583642] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:39.211 [2024-12-06 06:37:57.583690] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:11:39.211 true 00:11:39.211 
06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.211 [2024-12-06 06:37:57.595847] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.211 [2024-12-06 06:37:57.647662] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:39.211 [2024-12-06 06:37:57.647703] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:11:39.211 [2024-12-06 06:37:57.647748] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:11:39.211 true 00:11:39.211 06:37:57 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.211 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:11:39.212 [2024-12-06 06:37:57.659855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.212 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.212 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:11:39.212 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:11:39.212 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:11:39.212 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:11:39.212 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:11:39.212 06:37:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60765 00:11:39.212 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60765 ']' 00:11:39.212 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60765 00:11:39.212 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:11:39.212 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.212 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60765 00:11:39.212 killing process with pid 60765 00:11:39.212 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:39.212 06:37:57 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:39.212 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60765' 00:11:39.212 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60765 00:11:39.212 [2024-12-06 06:37:57.744875] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:39.212 06:37:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60765 00:11:39.212 [2024-12-06 06:37:57.745002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.212 [2024-12-06 06:37:57.745673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.212 [2024-12-06 06:37:57.745865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:39.212 [2024-12-06 06:37:57.761814] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:40.588 06:37:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:11:40.588 00:11:40.588 real 0m2.425s 00:11:40.588 user 0m2.686s 00:11:40.588 sys 0m0.420s 00:11:40.588 06:37:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.588 06:37:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.589 ************************************ 00:11:40.589 END TEST raid1_resize_test 00:11:40.589 ************************************ 00:11:40.589 06:37:58 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:40.589 06:37:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:40.589 06:37:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:11:40.589 06:37:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:40.589 06:37:58 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.589 06:37:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:40.589 ************************************ 00:11:40.589 START TEST raid_state_function_test 00:11:40.589 ************************************ 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:40.589 Process raid pid: 60828 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60828 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60828' 00:11:40.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60828 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60828 ']' 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.589 06:37:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.589 [2024-12-06 06:37:59.002503] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:11:40.589 [2024-12-06 06:37:59.002918] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.589 [2024-12-06 06:37:59.180862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.848 [2024-12-06 06:37:59.318594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.107 [2024-12-06 06:37:59.529367] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.107 [2024-12-06 06:37:59.529427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.675 [2024-12-06 06:38:00.037943] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:41.675 [2024-12-06 06:38:00.039111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:41.675 [2024-12-06 06:38:00.039143] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:41.675 [2024-12-06 06:38:00.039163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.675 06:38:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.675 "name": "Existed_Raid", 00:11:41.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.675 "strip_size_kb": 64, 00:11:41.675 "state": "configuring", 00:11:41.675 
"raid_level": "raid0", 00:11:41.675 "superblock": false, 00:11:41.675 "num_base_bdevs": 2, 00:11:41.675 "num_base_bdevs_discovered": 0, 00:11:41.675 "num_base_bdevs_operational": 2, 00:11:41.675 "base_bdevs_list": [ 00:11:41.675 { 00:11:41.675 "name": "BaseBdev1", 00:11:41.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.675 "is_configured": false, 00:11:41.675 "data_offset": 0, 00:11:41.675 "data_size": 0 00:11:41.675 }, 00:11:41.675 { 00:11:41.675 "name": "BaseBdev2", 00:11:41.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.675 "is_configured": false, 00:11:41.675 "data_offset": 0, 00:11:41.675 "data_size": 0 00:11:41.675 } 00:11:41.675 ] 00:11:41.675 }' 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.675 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.243 [2024-12-06 06:38:00.590018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.243 [2024-12-06 06:38:00.590227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:42.243 [2024-12-06 06:38:00.597998] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:42.243 [2024-12-06 06:38:00.598056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:42.243 [2024-12-06 06:38:00.598073] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.243 [2024-12-06 06:38:00.598093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.243 [2024-12-06 06:38:00.643008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.243 BaseBdev1 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.243 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.243 [ 00:11:42.243 { 00:11:42.243 "name": "BaseBdev1", 00:11:42.243 "aliases": [ 00:11:42.243 "e096ebd3-e7fe-4852-9239-2e9b8302d0df" 00:11:42.243 ], 00:11:42.243 "product_name": "Malloc disk", 00:11:42.243 "block_size": 512, 00:11:42.243 "num_blocks": 65536, 00:11:42.243 "uuid": "e096ebd3-e7fe-4852-9239-2e9b8302d0df", 00:11:42.243 "assigned_rate_limits": { 00:11:42.243 "rw_ios_per_sec": 0, 00:11:42.243 "rw_mbytes_per_sec": 0, 00:11:42.243 "r_mbytes_per_sec": 0, 00:11:42.243 "w_mbytes_per_sec": 0 00:11:42.243 }, 00:11:42.243 "claimed": true, 00:11:42.243 "claim_type": "exclusive_write", 00:11:42.243 "zoned": false, 00:11:42.243 "supported_io_types": { 00:11:42.243 "read": true, 00:11:42.243 "write": true, 00:11:42.243 "unmap": true, 00:11:42.243 "flush": true, 00:11:42.243 "reset": true, 00:11:42.243 "nvme_admin": false, 00:11:42.243 "nvme_io": false, 00:11:42.243 "nvme_io_md": false, 00:11:42.243 "write_zeroes": true, 00:11:42.243 "zcopy": true, 00:11:42.243 "get_zone_info": false, 00:11:42.243 "zone_management": false, 00:11:42.243 "zone_append": false, 00:11:42.243 "compare": false, 00:11:42.243 "compare_and_write": false, 00:11:42.243 "abort": true, 00:11:42.243 "seek_hole": false, 00:11:42.243 "seek_data": false, 00:11:42.243 "copy": true, 00:11:42.244 "nvme_iov_md": 
false 00:11:42.244 }, 00:11:42.244 "memory_domains": [ 00:11:42.244 { 00:11:42.244 "dma_device_id": "system", 00:11:42.244 "dma_device_type": 1 00:11:42.244 }, 00:11:42.244 { 00:11:42.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.244 "dma_device_type": 2 00:11:42.244 } 00:11:42.244 ], 00:11:42.244 "driver_specific": {} 00:11:42.244 } 00:11:42.244 ] 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.244 06:38:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.244 "name": "Existed_Raid", 00:11:42.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.244 "strip_size_kb": 64, 00:11:42.244 "state": "configuring", 00:11:42.244 "raid_level": "raid0", 00:11:42.244 "superblock": false, 00:11:42.244 "num_base_bdevs": 2, 00:11:42.244 "num_base_bdevs_discovered": 1, 00:11:42.244 "num_base_bdevs_operational": 2, 00:11:42.244 "base_bdevs_list": [ 00:11:42.244 { 00:11:42.244 "name": "BaseBdev1", 00:11:42.244 "uuid": "e096ebd3-e7fe-4852-9239-2e9b8302d0df", 00:11:42.244 "is_configured": true, 00:11:42.244 "data_offset": 0, 00:11:42.244 "data_size": 65536 00:11:42.244 }, 00:11:42.244 { 00:11:42.244 "name": "BaseBdev2", 00:11:42.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.244 "is_configured": false, 00:11:42.244 "data_offset": 0, 00:11:42.244 "data_size": 0 00:11:42.244 } 00:11:42.244 ] 00:11:42.244 }' 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.244 06:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.811 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:42.811 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.811 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.811 [2024-12-06 06:38:01.191248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.811 [2024-12-06 06:38:01.191315] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:42.811 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.811 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:42.811 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.811 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.812 [2024-12-06 06:38:01.203296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.812 [2024-12-06 06:38:01.205964] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.812 [2024-12-06 06:38:01.206160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.812 "name": "Existed_Raid", 00:11:42.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.812 "strip_size_kb": 64, 00:11:42.812 "state": "configuring", 00:11:42.812 "raid_level": "raid0", 00:11:42.812 "superblock": false, 00:11:42.812 "num_base_bdevs": 2, 00:11:42.812 "num_base_bdevs_discovered": 1, 00:11:42.812 "num_base_bdevs_operational": 2, 00:11:42.812 "base_bdevs_list": [ 00:11:42.812 { 00:11:42.812 "name": "BaseBdev1", 00:11:42.812 "uuid": "e096ebd3-e7fe-4852-9239-2e9b8302d0df", 00:11:42.812 "is_configured": true, 00:11:42.812 "data_offset": 0, 00:11:42.812 "data_size": 65536 00:11:42.812 }, 00:11:42.812 { 00:11:42.812 "name": "BaseBdev2", 00:11:42.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.812 "is_configured": false, 00:11:42.812 "data_offset": 0, 00:11:42.812 "data_size": 0 
00:11:42.812 } 00:11:42.812 ] 00:11:42.812 }' 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.812 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.071 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:43.071 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.071 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.330 [2024-12-06 06:38:01.747804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.330 [2024-12-06 06:38:01.747881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:43.330 [2024-12-06 06:38:01.747896] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:43.330 [2024-12-06 06:38:01.748278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:43.330 [2024-12-06 06:38:01.748519] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:43.330 [2024-12-06 06:38:01.748541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:43.330 [2024-12-06 06:38:01.748890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.330 BaseBdev2 00:11:43.330 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.330 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:43.330 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:43.330 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:43.330 06:38:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:43.330 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:43.330 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:43.330 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:43.330 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.330 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.330 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.330 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:43.330 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.330 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.330 [ 00:11:43.330 { 00:11:43.330 "name": "BaseBdev2", 00:11:43.330 "aliases": [ 00:11:43.330 "af879437-e147-4f53-be2f-eee297d7a2b0" 00:11:43.330 ], 00:11:43.330 "product_name": "Malloc disk", 00:11:43.330 "block_size": 512, 00:11:43.330 "num_blocks": 65536, 00:11:43.330 "uuid": "af879437-e147-4f53-be2f-eee297d7a2b0", 00:11:43.330 "assigned_rate_limits": { 00:11:43.330 "rw_ios_per_sec": 0, 00:11:43.331 "rw_mbytes_per_sec": 0, 00:11:43.331 "r_mbytes_per_sec": 0, 00:11:43.331 "w_mbytes_per_sec": 0 00:11:43.331 }, 00:11:43.331 "claimed": true, 00:11:43.331 "claim_type": "exclusive_write", 00:11:43.331 "zoned": false, 00:11:43.331 "supported_io_types": { 00:11:43.331 "read": true, 00:11:43.331 "write": true, 00:11:43.331 "unmap": true, 00:11:43.331 "flush": true, 00:11:43.331 "reset": true, 00:11:43.331 "nvme_admin": false, 00:11:43.331 "nvme_io": false, 00:11:43.331 "nvme_io_md": 
false, 00:11:43.331 "write_zeroes": true, 00:11:43.331 "zcopy": true, 00:11:43.331 "get_zone_info": false, 00:11:43.331 "zone_management": false, 00:11:43.331 "zone_append": false, 00:11:43.331 "compare": false, 00:11:43.331 "compare_and_write": false, 00:11:43.331 "abort": true, 00:11:43.331 "seek_hole": false, 00:11:43.331 "seek_data": false, 00:11:43.331 "copy": true, 00:11:43.331 "nvme_iov_md": false 00:11:43.331 }, 00:11:43.331 "memory_domains": [ 00:11:43.331 { 00:11:43.331 "dma_device_id": "system", 00:11:43.331 "dma_device_type": 1 00:11:43.331 }, 00:11:43.331 { 00:11:43.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.331 "dma_device_type": 2 00:11:43.331 } 00:11:43.331 ], 00:11:43.331 "driver_specific": {} 00:11:43.331 } 00:11:43.331 ] 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.331 "name": "Existed_Raid", 00:11:43.331 "uuid": "721f1b94-98aa-42a0-bdc8-9b5927706d38", 00:11:43.331 "strip_size_kb": 64, 00:11:43.331 "state": "online", 00:11:43.331 "raid_level": "raid0", 00:11:43.331 "superblock": false, 00:11:43.331 "num_base_bdevs": 2, 00:11:43.331 "num_base_bdevs_discovered": 2, 00:11:43.331 "num_base_bdevs_operational": 2, 00:11:43.331 "base_bdevs_list": [ 00:11:43.331 { 00:11:43.331 "name": "BaseBdev1", 00:11:43.331 "uuid": "e096ebd3-e7fe-4852-9239-2e9b8302d0df", 00:11:43.331 "is_configured": true, 00:11:43.331 "data_offset": 0, 00:11:43.331 "data_size": 65536 00:11:43.331 }, 00:11:43.331 { 00:11:43.331 "name": "BaseBdev2", 00:11:43.331 "uuid": "af879437-e147-4f53-be2f-eee297d7a2b0", 00:11:43.331 "is_configured": true, 00:11:43.331 "data_offset": 0, 00:11:43.331 "data_size": 65536 00:11:43.331 } 00:11:43.331 ] 00:11:43.331 }' 00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:43.331 06:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:43.898 [2024-12-06 06:38:02.336409] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:43.898 "name": "Existed_Raid", 00:11:43.898 "aliases": [ 00:11:43.898 "721f1b94-98aa-42a0-bdc8-9b5927706d38" 00:11:43.898 ], 00:11:43.898 "product_name": "Raid Volume", 00:11:43.898 "block_size": 512, 00:11:43.898 "num_blocks": 131072, 00:11:43.898 "uuid": "721f1b94-98aa-42a0-bdc8-9b5927706d38", 00:11:43.898 "assigned_rate_limits": { 00:11:43.898 "rw_ios_per_sec": 0, 00:11:43.898 "rw_mbytes_per_sec": 0, 00:11:43.898 "r_mbytes_per_sec": 
0, 00:11:43.898 "w_mbytes_per_sec": 0 00:11:43.898 }, 00:11:43.898 "claimed": false, 00:11:43.898 "zoned": false, 00:11:43.898 "supported_io_types": { 00:11:43.898 "read": true, 00:11:43.898 "write": true, 00:11:43.898 "unmap": true, 00:11:43.898 "flush": true, 00:11:43.898 "reset": true, 00:11:43.898 "nvme_admin": false, 00:11:43.898 "nvme_io": false, 00:11:43.898 "nvme_io_md": false, 00:11:43.898 "write_zeroes": true, 00:11:43.898 "zcopy": false, 00:11:43.898 "get_zone_info": false, 00:11:43.898 "zone_management": false, 00:11:43.898 "zone_append": false, 00:11:43.898 "compare": false, 00:11:43.898 "compare_and_write": false, 00:11:43.898 "abort": false, 00:11:43.898 "seek_hole": false, 00:11:43.898 "seek_data": false, 00:11:43.898 "copy": false, 00:11:43.898 "nvme_iov_md": false 00:11:43.898 }, 00:11:43.898 "memory_domains": [ 00:11:43.898 { 00:11:43.898 "dma_device_id": "system", 00:11:43.898 "dma_device_type": 1 00:11:43.898 }, 00:11:43.898 { 00:11:43.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.898 "dma_device_type": 2 00:11:43.898 }, 00:11:43.898 { 00:11:43.898 "dma_device_id": "system", 00:11:43.898 "dma_device_type": 1 00:11:43.898 }, 00:11:43.898 { 00:11:43.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.898 "dma_device_type": 2 00:11:43.898 } 00:11:43.898 ], 00:11:43.898 "driver_specific": { 00:11:43.898 "raid": { 00:11:43.898 "uuid": "721f1b94-98aa-42a0-bdc8-9b5927706d38", 00:11:43.898 "strip_size_kb": 64, 00:11:43.898 "state": "online", 00:11:43.898 "raid_level": "raid0", 00:11:43.898 "superblock": false, 00:11:43.898 "num_base_bdevs": 2, 00:11:43.898 "num_base_bdevs_discovered": 2, 00:11:43.898 "num_base_bdevs_operational": 2, 00:11:43.898 "base_bdevs_list": [ 00:11:43.898 { 00:11:43.898 "name": "BaseBdev1", 00:11:43.898 "uuid": "e096ebd3-e7fe-4852-9239-2e9b8302d0df", 00:11:43.898 "is_configured": true, 00:11:43.898 "data_offset": 0, 00:11:43.898 "data_size": 65536 00:11:43.898 }, 00:11:43.898 { 00:11:43.898 "name": "BaseBdev2", 
00:11:43.898 "uuid": "af879437-e147-4f53-be2f-eee297d7a2b0", 00:11:43.898 "is_configured": true, 00:11:43.898 "data_offset": 0, 00:11:43.898 "data_size": 65536 00:11:43.898 } 00:11:43.898 ] 00:11:43.898 } 00:11:43.898 } 00:11:43.898 }' 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:43.898 BaseBdev2' 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.898 06:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.157 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.157 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.157 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:44.157 06:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.157 06:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.157 [2024-12-06 06:38:02.612409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:44.158 [2024-12-06 06:38:02.612456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.158 [2024-12-06 06:38:02.612540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.158 "name": "Existed_Raid", 00:11:44.158 "uuid": "721f1b94-98aa-42a0-bdc8-9b5927706d38", 00:11:44.158 "strip_size_kb": 64, 00:11:44.158 
"state": "offline", 00:11:44.158 "raid_level": "raid0", 00:11:44.158 "superblock": false, 00:11:44.158 "num_base_bdevs": 2, 00:11:44.158 "num_base_bdevs_discovered": 1, 00:11:44.158 "num_base_bdevs_operational": 1, 00:11:44.158 "base_bdevs_list": [ 00:11:44.158 { 00:11:44.158 "name": null, 00:11:44.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.158 "is_configured": false, 00:11:44.158 "data_offset": 0, 00:11:44.158 "data_size": 65536 00:11:44.158 }, 00:11:44.158 { 00:11:44.158 "name": "BaseBdev2", 00:11:44.158 "uuid": "af879437-e147-4f53-be2f-eee297d7a2b0", 00:11:44.158 "is_configured": true, 00:11:44.158 "data_offset": 0, 00:11:44.158 "data_size": 65536 00:11:44.158 } 00:11:44.158 ] 00:11:44.158 }' 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.158 06:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.726 06:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:44.726 06:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.726 06:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:44.726 06:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.726 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.726 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.726 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.726 06:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:44.726 06:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:44.726 06:38:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:44.726 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.726 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.726 [2024-12-06 06:38:03.285289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:44.726 [2024-12-06 06:38:03.285364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60828 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60828 ']' 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60828 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60828 00:11:44.985 killing process with pid 60828 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60828' 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60828 00:11:44.985 [2024-12-06 06:38:03.468206] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:44.985 06:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60828 00:11:44.985 [2024-12-06 06:38:03.483713] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:45.923 06:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:45.923 ************************************ 00:11:45.923 END TEST raid_state_function_test 00:11:45.923 ************************************ 00:11:45.923 00:11:45.923 real 0m5.662s 00:11:45.923 user 0m8.582s 00:11:45.923 sys 0m0.781s 00:11:45.923 06:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.923 06:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.180 06:38:04 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:11:46.180 06:38:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:11:46.180 06:38:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.180 06:38:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:46.181 ************************************ 00:11:46.181 START TEST raid_state_function_test_sb 00:11:46.181 ************************************ 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:46.181 Process raid pid: 61082 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61082 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61082' 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61082 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61082 ']' 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.181 06:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.181 [2024-12-06 06:38:04.723901] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:11:46.181 [2024-12-06 06:38:04.724264] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.438 [2024-12-06 06:38:04.902885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.438 [2024-12-06 06:38:05.036343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.697 [2024-12-06 06:38:05.242888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.697 [2024-12-06 06:38:05.242934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.263 06:38:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.263 06:38:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:47.263 06:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:47.263 06:38:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.263 06:38:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.263 [2024-12-06 06:38:05.687496] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:11:47.263 [2024-12-06 06:38:05.687594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:47.263 [2024-12-06 06:38:05.687613] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:47.264 [2024-12-06 06:38:05.687631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.264 
06:38:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.264 "name": "Existed_Raid", 00:11:47.264 "uuid": "daa9325a-d0f0-44ee-ac02-3c97c6da1372", 00:11:47.264 "strip_size_kb": 64, 00:11:47.264 "state": "configuring", 00:11:47.264 "raid_level": "raid0", 00:11:47.264 "superblock": true, 00:11:47.264 "num_base_bdevs": 2, 00:11:47.264 "num_base_bdevs_discovered": 0, 00:11:47.264 "num_base_bdevs_operational": 2, 00:11:47.264 "base_bdevs_list": [ 00:11:47.264 { 00:11:47.264 "name": "BaseBdev1", 00:11:47.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.264 "is_configured": false, 00:11:47.264 "data_offset": 0, 00:11:47.264 "data_size": 0 00:11:47.264 }, 00:11:47.264 { 00:11:47.264 "name": "BaseBdev2", 00:11:47.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.264 "is_configured": false, 00:11:47.264 "data_offset": 0, 00:11:47.264 "data_size": 0 00:11:47.264 } 00:11:47.264 ] 00:11:47.264 }' 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.264 06:38:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.830 [2024-12-06 06:38:06.191611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:11:47.830 [2024-12-06 06:38:06.191656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.830 [2024-12-06 06:38:06.199603] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:47.830 [2024-12-06 06:38:06.199657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:47.830 [2024-12-06 06:38:06.199673] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:47.830 [2024-12-06 06:38:06.199693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.830 [2024-12-06 06:38:06.245250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:47.830 BaseBdev1 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.830 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.830 [ 00:11:47.830 { 00:11:47.830 "name": "BaseBdev1", 00:11:47.830 "aliases": [ 00:11:47.830 "07621c39-de5c-43bd-b293-472f7aefb13a" 00:11:47.830 ], 00:11:47.830 "product_name": "Malloc disk", 00:11:47.830 "block_size": 512, 00:11:47.830 "num_blocks": 65536, 00:11:47.830 "uuid": "07621c39-de5c-43bd-b293-472f7aefb13a", 00:11:47.830 "assigned_rate_limits": { 00:11:47.830 "rw_ios_per_sec": 0, 00:11:47.830 "rw_mbytes_per_sec": 0, 00:11:47.830 "r_mbytes_per_sec": 0, 00:11:47.830 "w_mbytes_per_sec": 0 00:11:47.830 }, 00:11:47.830 "claimed": true, 
00:11:47.830 "claim_type": "exclusive_write", 00:11:47.830 "zoned": false, 00:11:47.830 "supported_io_types": { 00:11:47.830 "read": true, 00:11:47.830 "write": true, 00:11:47.830 "unmap": true, 00:11:47.830 "flush": true, 00:11:47.830 "reset": true, 00:11:47.830 "nvme_admin": false, 00:11:47.830 "nvme_io": false, 00:11:47.830 "nvme_io_md": false, 00:11:47.830 "write_zeroes": true, 00:11:47.830 "zcopy": true, 00:11:47.830 "get_zone_info": false, 00:11:47.830 "zone_management": false, 00:11:47.830 "zone_append": false, 00:11:47.830 "compare": false, 00:11:47.830 "compare_and_write": false, 00:11:47.830 "abort": true, 00:11:47.830 "seek_hole": false, 00:11:47.830 "seek_data": false, 00:11:47.830 "copy": true, 00:11:47.830 "nvme_iov_md": false 00:11:47.830 }, 00:11:47.830 "memory_domains": [ 00:11:47.830 { 00:11:47.830 "dma_device_id": "system", 00:11:47.830 "dma_device_type": 1 00:11:47.831 }, 00:11:47.831 { 00:11:47.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.831 "dma_device_type": 2 00:11:47.831 } 00:11:47.831 ], 00:11:47.831 "driver_specific": {} 00:11:47.831 } 00:11:47.831 ] 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.831 06:38:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.831 "name": "Existed_Raid", 00:11:47.831 "uuid": "f7200184-78db-43fc-869e-15e6214a10db", 00:11:47.831 "strip_size_kb": 64, 00:11:47.831 "state": "configuring", 00:11:47.831 "raid_level": "raid0", 00:11:47.831 "superblock": true, 00:11:47.831 "num_base_bdevs": 2, 00:11:47.831 "num_base_bdevs_discovered": 1, 00:11:47.831 "num_base_bdevs_operational": 2, 00:11:47.831 "base_bdevs_list": [ 00:11:47.831 { 00:11:47.831 "name": "BaseBdev1", 00:11:47.831 "uuid": "07621c39-de5c-43bd-b293-472f7aefb13a", 00:11:47.831 "is_configured": true, 00:11:47.831 "data_offset": 2048, 00:11:47.831 "data_size": 63488 00:11:47.831 }, 00:11:47.831 { 00:11:47.831 "name": "BaseBdev2", 00:11:47.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.831 
"is_configured": false, 00:11:47.831 "data_offset": 0, 00:11:47.831 "data_size": 0 00:11:47.831 } 00:11:47.831 ] 00:11:47.831 }' 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.831 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.398 [2024-12-06 06:38:06.801460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:48.398 [2024-12-06 06:38:06.801691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.398 [2024-12-06 06:38:06.809509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.398 [2024-12-06 06:38:06.811981] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:48.398 [2024-12-06 06:38:06.812039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.398 06:38:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.398 06:38:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.398 "name": "Existed_Raid", 00:11:48.398 "uuid": "73d8e298-30a2-4453-816e-a84dcab287c7", 00:11:48.398 "strip_size_kb": 64, 00:11:48.398 "state": "configuring", 00:11:48.398 "raid_level": "raid0", 00:11:48.398 "superblock": true, 00:11:48.398 "num_base_bdevs": 2, 00:11:48.398 "num_base_bdevs_discovered": 1, 00:11:48.398 "num_base_bdevs_operational": 2, 00:11:48.398 "base_bdevs_list": [ 00:11:48.398 { 00:11:48.398 "name": "BaseBdev1", 00:11:48.398 "uuid": "07621c39-de5c-43bd-b293-472f7aefb13a", 00:11:48.398 "is_configured": true, 00:11:48.398 "data_offset": 2048, 00:11:48.398 "data_size": 63488 00:11:48.398 }, 00:11:48.398 { 00:11:48.398 "name": "BaseBdev2", 00:11:48.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.398 "is_configured": false, 00:11:48.398 "data_offset": 0, 00:11:48.398 "data_size": 0 00:11:48.398 } 00:11:48.398 ] 00:11:48.398 }' 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.398 06:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.966 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:48.966 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.966 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.966 [2024-12-06 06:38:07.396735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:48.966 [2024-12-06 06:38:07.397286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:48.966 [2024-12-06 06:38:07.397433] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:48.966 BaseBdev2 00:11:48.966 [2024-12-06 06:38:07.397838] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:48.966 [2024-12-06 06:38:07.398048] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:48.966 [2024-12-06 06:38:07.398073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:48.966 [2024-12-06 06:38:07.398256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.966 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.966 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:48.966 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:48.966 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:48.966 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:48.966 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:48.966 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:48.966 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:48.966 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.966 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.966 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.966 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:48.966 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.966 
06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.966 [ 00:11:48.966 { 00:11:48.966 "name": "BaseBdev2", 00:11:48.966 "aliases": [ 00:11:48.966 "6e9f4b70-f3bf-434e-a02e-fd758a2d7f30" 00:11:48.966 ], 00:11:48.966 "product_name": "Malloc disk", 00:11:48.966 "block_size": 512, 00:11:48.966 "num_blocks": 65536, 00:11:48.966 "uuid": "6e9f4b70-f3bf-434e-a02e-fd758a2d7f30", 00:11:48.966 "assigned_rate_limits": { 00:11:48.966 "rw_ios_per_sec": 0, 00:11:48.966 "rw_mbytes_per_sec": 0, 00:11:48.966 "r_mbytes_per_sec": 0, 00:11:48.966 "w_mbytes_per_sec": 0 00:11:48.966 }, 00:11:48.966 "claimed": true, 00:11:48.966 "claim_type": "exclusive_write", 00:11:48.966 "zoned": false, 00:11:48.966 "supported_io_types": { 00:11:48.966 "read": true, 00:11:48.966 "write": true, 00:11:48.966 "unmap": true, 00:11:48.966 "flush": true, 00:11:48.966 "reset": true, 00:11:48.966 "nvme_admin": false, 00:11:48.966 "nvme_io": false, 00:11:48.966 "nvme_io_md": false, 00:11:48.966 "write_zeroes": true, 00:11:48.966 "zcopy": true, 00:11:48.966 "get_zone_info": false, 00:11:48.966 "zone_management": false, 00:11:48.966 "zone_append": false, 00:11:48.966 "compare": false, 00:11:48.966 "compare_and_write": false, 00:11:48.966 "abort": true, 00:11:48.966 "seek_hole": false, 00:11:48.966 "seek_data": false, 00:11:48.966 "copy": true, 00:11:48.966 "nvme_iov_md": false 00:11:48.966 }, 00:11:48.966 "memory_domains": [ 00:11:48.966 { 00:11:48.966 "dma_device_id": "system", 00:11:48.966 "dma_device_type": 1 00:11:48.966 }, 00:11:48.966 { 00:11:48.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.966 "dma_device_type": 2 00:11:48.966 } 00:11:48.966 ], 00:11:48.966 "driver_specific": {} 00:11:48.966 } 00:11:48.966 ] 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:48.967 06:38:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.967 06:38:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.967 "name": "Existed_Raid", 00:11:48.967 "uuid": "73d8e298-30a2-4453-816e-a84dcab287c7", 00:11:48.967 "strip_size_kb": 64, 00:11:48.967 "state": "online", 00:11:48.967 "raid_level": "raid0", 00:11:48.967 "superblock": true, 00:11:48.967 "num_base_bdevs": 2, 00:11:48.967 "num_base_bdevs_discovered": 2, 00:11:48.967 "num_base_bdevs_operational": 2, 00:11:48.967 "base_bdevs_list": [ 00:11:48.967 { 00:11:48.967 "name": "BaseBdev1", 00:11:48.967 "uuid": "07621c39-de5c-43bd-b293-472f7aefb13a", 00:11:48.967 "is_configured": true, 00:11:48.967 "data_offset": 2048, 00:11:48.967 "data_size": 63488 00:11:48.967 }, 00:11:48.967 { 00:11:48.967 "name": "BaseBdev2", 00:11:48.967 "uuid": "6e9f4b70-f3bf-434e-a02e-fd758a2d7f30", 00:11:48.967 "is_configured": true, 00:11:48.967 "data_offset": 2048, 00:11:48.967 "data_size": 63488 00:11:48.967 } 00:11:48.967 ] 00:11:48.967 }' 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.967 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.534 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:49.534 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:49.534 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:49.534 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:49.534 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:49.534 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:49.534 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:49.534 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.534 06:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:49.534 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.534 [2024-12-06 06:38:07.953313] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.534 06:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.534 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:49.534 "name": "Existed_Raid", 00:11:49.534 "aliases": [ 00:11:49.534 "73d8e298-30a2-4453-816e-a84dcab287c7" 00:11:49.534 ], 00:11:49.534 "product_name": "Raid Volume", 00:11:49.534 "block_size": 512, 00:11:49.534 "num_blocks": 126976, 00:11:49.534 "uuid": "73d8e298-30a2-4453-816e-a84dcab287c7", 00:11:49.534 "assigned_rate_limits": { 00:11:49.534 "rw_ios_per_sec": 0, 00:11:49.534 "rw_mbytes_per_sec": 0, 00:11:49.534 "r_mbytes_per_sec": 0, 00:11:49.534 "w_mbytes_per_sec": 0 00:11:49.534 }, 00:11:49.534 "claimed": false, 00:11:49.534 "zoned": false, 00:11:49.534 "supported_io_types": { 00:11:49.534 "read": true, 00:11:49.534 "write": true, 00:11:49.534 "unmap": true, 00:11:49.534 "flush": true, 00:11:49.534 "reset": true, 00:11:49.534 "nvme_admin": false, 00:11:49.534 "nvme_io": false, 00:11:49.534 "nvme_io_md": false, 00:11:49.534 "write_zeroes": true, 00:11:49.534 "zcopy": false, 00:11:49.534 "get_zone_info": false, 00:11:49.534 "zone_management": false, 00:11:49.534 "zone_append": false, 00:11:49.534 "compare": false, 00:11:49.534 "compare_and_write": false, 00:11:49.534 "abort": false, 00:11:49.534 "seek_hole": false, 00:11:49.534 "seek_data": false, 00:11:49.534 "copy": false, 00:11:49.534 "nvme_iov_md": false 00:11:49.534 }, 00:11:49.534 "memory_domains": [ 00:11:49.534 { 00:11:49.534 
"dma_device_id": "system", 00:11:49.534 "dma_device_type": 1 00:11:49.534 }, 00:11:49.534 { 00:11:49.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.534 "dma_device_type": 2 00:11:49.534 }, 00:11:49.534 { 00:11:49.534 "dma_device_id": "system", 00:11:49.534 "dma_device_type": 1 00:11:49.534 }, 00:11:49.534 { 00:11:49.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.534 "dma_device_type": 2 00:11:49.534 } 00:11:49.534 ], 00:11:49.534 "driver_specific": { 00:11:49.534 "raid": { 00:11:49.534 "uuid": "73d8e298-30a2-4453-816e-a84dcab287c7", 00:11:49.534 "strip_size_kb": 64, 00:11:49.534 "state": "online", 00:11:49.534 "raid_level": "raid0", 00:11:49.534 "superblock": true, 00:11:49.534 "num_base_bdevs": 2, 00:11:49.534 "num_base_bdevs_discovered": 2, 00:11:49.534 "num_base_bdevs_operational": 2, 00:11:49.534 "base_bdevs_list": [ 00:11:49.534 { 00:11:49.534 "name": "BaseBdev1", 00:11:49.534 "uuid": "07621c39-de5c-43bd-b293-472f7aefb13a", 00:11:49.534 "is_configured": true, 00:11:49.534 "data_offset": 2048, 00:11:49.534 "data_size": 63488 00:11:49.534 }, 00:11:49.534 { 00:11:49.534 "name": "BaseBdev2", 00:11:49.534 "uuid": "6e9f4b70-f3bf-434e-a02e-fd758a2d7f30", 00:11:49.534 "is_configured": true, 00:11:49.534 "data_offset": 2048, 00:11:49.534 "data_size": 63488 00:11:49.534 } 00:11:49.534 ] 00:11:49.534 } 00:11:49.534 } 00:11:49.534 }' 00:11:49.534 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:49.534 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:49.534 BaseBdev2' 00:11:49.534 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.534 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:49.534 06:38:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.534 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:49.534 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.534 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.534 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.534 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.534 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.534 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.534 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.534 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:49.534 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.534 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.534 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.793 [2024-12-06 06:38:08.221067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:49.793 [2024-12-06 06:38:08.221113] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.793 [2024-12-06 06:38:08.221194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.793 "name": "Existed_Raid", 00:11:49.793 "uuid": "73d8e298-30a2-4453-816e-a84dcab287c7", 00:11:49.793 "strip_size_kb": 64, 00:11:49.793 "state": "offline", 00:11:49.793 "raid_level": "raid0", 00:11:49.793 "superblock": true, 00:11:49.793 "num_base_bdevs": 2, 00:11:49.793 "num_base_bdevs_discovered": 1, 00:11:49.793 "num_base_bdevs_operational": 1, 00:11:49.793 "base_bdevs_list": [ 00:11:49.793 { 00:11:49.793 "name": null, 00:11:49.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.793 "is_configured": false, 00:11:49.793 "data_offset": 0, 00:11:49.793 "data_size": 63488 00:11:49.793 }, 00:11:49.793 { 00:11:49.793 "name": "BaseBdev2", 00:11:49.793 "uuid": "6e9f4b70-f3bf-434e-a02e-fd758a2d7f30", 00:11:49.793 "is_configured": true, 00:11:49.793 "data_offset": 2048, 00:11:49.793 "data_size": 63488 00:11:49.793 } 00:11:49.793 ] 
00:11:49.793 }' 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.793 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.360 [2024-12-06 06:38:08.841135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:50.360 [2024-12-06 06:38:08.841220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.360 06:38:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61082 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61082 ']' 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61082 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.360 06:38:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61082 00:11:50.619 killing process with pid 61082 00:11:50.619 06:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.619 06:38:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.619 06:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61082' 00:11:50.619 06:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61082 00:11:50.619 [2024-12-06 06:38:09.019713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:50.619 06:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61082 00:11:50.619 [2024-12-06 06:38:09.034493] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:51.554 06:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:51.554 00:11:51.554 real 0m5.474s 00:11:51.554 user 0m8.281s 00:11:51.554 sys 0m0.738s 00:11:51.554 06:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.554 06:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.554 ************************************ 00:11:51.554 END TEST raid_state_function_test_sb 00:11:51.554 ************************************ 00:11:51.554 06:38:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:11:51.554 06:38:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:51.554 06:38:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.554 06:38:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:51.554 ************************************ 00:11:51.554 START TEST raid_superblock_test 00:11:51.554 ************************************ 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61344 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61344 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61344 ']' 00:11:51.554 
06:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.554 06:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.813 [2024-12-06 06:38:10.235978] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:11:51.813 [2024-12-06 06:38:10.236374] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61344 ] 00:11:51.813 [2024-12-06 06:38:10.412798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.072 [2024-12-06 06:38:10.548074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.331 [2024-12-06 06:38:10.757541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.331 [2024-12-06 06:38:10.757631] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.589 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.589 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:52.589 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:52.589 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:11:52.589 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:52.589 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:52.589 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:52.589 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:52.589 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:52.589 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:52.589 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:52.589 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.589 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.935 malloc1 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.935 [2024-12-06 06:38:11.265228] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:52.935 [2024-12-06 06:38:11.265459] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.935 [2024-12-06 06:38:11.265560] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:52.935 [2024-12-06 06:38:11.265790] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:11:52.935 [2024-12-06 06:38:11.268836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.935 [2024-12-06 06:38:11.269010] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:52.935 pt1 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.935 malloc2 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.935 [2024-12-06 06:38:11.322588] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:52.935 [2024-12-06 06:38:11.322792] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.935 [2024-12-06 06:38:11.322878] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:52.935 [2024-12-06 06:38:11.323133] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.935 [2024-12-06 06:38:11.326067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.935 [2024-12-06 06:38:11.326228] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:52.935 pt2 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.935 [2024-12-06 06:38:11.330693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:52.935 [2024-12-06 06:38:11.333138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:52.935 [2024-12-06 06:38:11.333501] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:52.935 [2024-12-06 06:38:11.333542] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:11:52.935 [2024-12-06 06:38:11.333889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:52.935 [2024-12-06 06:38:11.334095] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:52.935 [2024-12-06 06:38:11.334116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:52.935 [2024-12-06 06:38:11.334310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.935 06:38:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.935 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.935 "name": "raid_bdev1", 00:11:52.935 "uuid": "f49ebf00-75bc-41e0-808a-bc070d1bd60d", 00:11:52.935 "strip_size_kb": 64, 00:11:52.935 "state": "online", 00:11:52.935 "raid_level": "raid0", 00:11:52.935 "superblock": true, 00:11:52.935 "num_base_bdevs": 2, 00:11:52.935 "num_base_bdevs_discovered": 2, 00:11:52.935 "num_base_bdevs_operational": 2, 00:11:52.935 "base_bdevs_list": [ 00:11:52.935 { 00:11:52.935 "name": "pt1", 00:11:52.935 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:52.935 "is_configured": true, 00:11:52.935 "data_offset": 2048, 00:11:52.935 "data_size": 63488 00:11:52.935 }, 00:11:52.935 { 00:11:52.935 "name": "pt2", 00:11:52.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:52.936 "is_configured": true, 00:11:52.936 "data_offset": 2048, 00:11:52.936 "data_size": 63488 00:11:52.936 } 00:11:52.936 ] 00:11:52.936 }' 00:11:52.936 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.936 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.209 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:53.209 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:53.209 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:53.209 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:53.209 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:53.209 
06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:53.209 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:53.209 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.209 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:53.209 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.209 [2024-12-06 06:38:11.835161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.209 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.469 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:53.469 "name": "raid_bdev1", 00:11:53.469 "aliases": [ 00:11:53.469 "f49ebf00-75bc-41e0-808a-bc070d1bd60d" 00:11:53.469 ], 00:11:53.469 "product_name": "Raid Volume", 00:11:53.469 "block_size": 512, 00:11:53.469 "num_blocks": 126976, 00:11:53.469 "uuid": "f49ebf00-75bc-41e0-808a-bc070d1bd60d", 00:11:53.469 "assigned_rate_limits": { 00:11:53.469 "rw_ios_per_sec": 0, 00:11:53.469 "rw_mbytes_per_sec": 0, 00:11:53.469 "r_mbytes_per_sec": 0, 00:11:53.469 "w_mbytes_per_sec": 0 00:11:53.469 }, 00:11:53.469 "claimed": false, 00:11:53.469 "zoned": false, 00:11:53.469 "supported_io_types": { 00:11:53.469 "read": true, 00:11:53.469 "write": true, 00:11:53.469 "unmap": true, 00:11:53.469 "flush": true, 00:11:53.469 "reset": true, 00:11:53.469 "nvme_admin": false, 00:11:53.469 "nvme_io": false, 00:11:53.469 "nvme_io_md": false, 00:11:53.469 "write_zeroes": true, 00:11:53.469 "zcopy": false, 00:11:53.469 "get_zone_info": false, 00:11:53.469 "zone_management": false, 00:11:53.469 "zone_append": false, 00:11:53.469 "compare": false, 00:11:53.469 "compare_and_write": false, 00:11:53.469 "abort": false, 00:11:53.469 "seek_hole": false, 00:11:53.469 
"seek_data": false, 00:11:53.469 "copy": false, 00:11:53.469 "nvme_iov_md": false 00:11:53.469 }, 00:11:53.469 "memory_domains": [ 00:11:53.469 { 00:11:53.469 "dma_device_id": "system", 00:11:53.469 "dma_device_type": 1 00:11:53.469 }, 00:11:53.469 { 00:11:53.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.469 "dma_device_type": 2 00:11:53.469 }, 00:11:53.469 { 00:11:53.469 "dma_device_id": "system", 00:11:53.469 "dma_device_type": 1 00:11:53.469 }, 00:11:53.469 { 00:11:53.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.469 "dma_device_type": 2 00:11:53.469 } 00:11:53.469 ], 00:11:53.469 "driver_specific": { 00:11:53.469 "raid": { 00:11:53.469 "uuid": "f49ebf00-75bc-41e0-808a-bc070d1bd60d", 00:11:53.469 "strip_size_kb": 64, 00:11:53.469 "state": "online", 00:11:53.469 "raid_level": "raid0", 00:11:53.469 "superblock": true, 00:11:53.469 "num_base_bdevs": 2, 00:11:53.469 "num_base_bdevs_discovered": 2, 00:11:53.469 "num_base_bdevs_operational": 2, 00:11:53.469 "base_bdevs_list": [ 00:11:53.469 { 00:11:53.469 "name": "pt1", 00:11:53.469 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:53.469 "is_configured": true, 00:11:53.469 "data_offset": 2048, 00:11:53.469 "data_size": 63488 00:11:53.469 }, 00:11:53.469 { 00:11:53.469 "name": "pt2", 00:11:53.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:53.469 "is_configured": true, 00:11:53.469 "data_offset": 2048, 00:11:53.469 "data_size": 63488 00:11:53.469 } 00:11:53.469 ] 00:11:53.469 } 00:11:53.469 } 00:11:53.469 }' 00:11:53.469 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:53.469 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:53.469 pt2' 00:11:53.469 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.469 06:38:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:53.469 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.469 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.469 06:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:53.469 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.469 06:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.469 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.469 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.469 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.469 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.469 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:53.469 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.469 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.469 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.469 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.469 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.469 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.469 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:53.469 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.469 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.469 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:53.469 [2024-12-06 06:38:12.091227] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.469 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f49ebf00-75bc-41e0-808a-bc070d1bd60d 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f49ebf00-75bc-41e0-808a-bc070d1bd60d ']' 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.729 [2024-12-06 06:38:12.138871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:53.729 [2024-12-06 06:38:12.138905] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.729 [2024-12-06 06:38:12.139014] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.729 [2024-12-06 06:38:12.139080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.729 [2024-12-06 06:38:12.139101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.729 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.730 [2024-12-06 06:38:12.278975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:53.730 [2024-12-06 06:38:12.281517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:53.730 [2024-12-06 06:38:12.281629] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:53.730 [2024-12-06 06:38:12.281706] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:53.730 [2024-12-06 06:38:12.281734] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:53.730 [2024-12-06 06:38:12.281753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:53.730 request: 00:11:53.730 { 00:11:53.730 "name": "raid_bdev1", 00:11:53.730 "raid_level": "raid0", 00:11:53.730 "base_bdevs": [ 00:11:53.730 "malloc1", 00:11:53.730 "malloc2" 00:11:53.730 ], 00:11:53.730 "strip_size_kb": 64, 00:11:53.730 "superblock": false, 00:11:53.730 "method": "bdev_raid_create", 00:11:53.730 "req_id": 1 00:11:53.730 } 00:11:53.730 Got JSON-RPC error response 00:11:53.730 response: 00:11:53.730 { 00:11:53.730 "code": -17, 00:11:53.730 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:53.730 } 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.730 
06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.730 [2024-12-06 06:38:12.338952] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:53.730 [2024-12-06 06:38:12.339174] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.730 [2024-12-06 06:38:12.339329] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:53.730 [2024-12-06 06:38:12.339461] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.730 [2024-12-06 06:38:12.342512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.730 [2024-12-06 06:38:12.342697] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:53.730 [2024-12-06 06:38:12.342928] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:53.730 [2024-12-06 06:38:12.343124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:53.730 pt1 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.730 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.989 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.989 "name": "raid_bdev1", 00:11:53.989 "uuid": "f49ebf00-75bc-41e0-808a-bc070d1bd60d", 00:11:53.989 "strip_size_kb": 64, 00:11:53.989 "state": "configuring", 00:11:53.989 "raid_level": "raid0", 00:11:53.989 "superblock": true, 00:11:53.989 "num_base_bdevs": 2, 00:11:53.989 "num_base_bdevs_discovered": 1, 00:11:53.989 "num_base_bdevs_operational": 2, 00:11:53.989 "base_bdevs_list": [ 00:11:53.989 { 00:11:53.989 "name": "pt1", 00:11:53.989 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:11:53.989 "is_configured": true, 00:11:53.989 "data_offset": 2048, 00:11:53.989 "data_size": 63488 00:11:53.989 }, 00:11:53.989 { 00:11:53.989 "name": null, 00:11:53.989 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:53.989 "is_configured": false, 00:11:53.989 "data_offset": 2048, 00:11:53.989 "data_size": 63488 00:11:53.989 } 00:11:53.989 ] 00:11:53.989 }' 00:11:53.989 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.989 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.248 [2024-12-06 06:38:12.839184] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:54.248 [2024-12-06 06:38:12.839306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.248 [2024-12-06 06:38:12.839339] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:54.248 [2024-12-06 06:38:12.839357] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.248 [2024-12-06 06:38:12.840056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.248 [2024-12-06 06:38:12.840107] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:11:54.248 [2024-12-06 06:38:12.840242] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:54.248 [2024-12-06 06:38:12.840295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:54.248 [2024-12-06 06:38:12.840468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:54.248 [2024-12-06 06:38:12.840505] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:54.248 [2024-12-06 06:38:12.840895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:54.248 [2024-12-06 06:38:12.841107] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:54.248 [2024-12-06 06:38:12.841123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:54.248 [2024-12-06 06:38:12.841347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.248 pt2 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.248 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.507 06:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.507 "name": "raid_bdev1", 00:11:54.507 "uuid": "f49ebf00-75bc-41e0-808a-bc070d1bd60d", 00:11:54.507 "strip_size_kb": 64, 00:11:54.507 "state": "online", 00:11:54.507 "raid_level": "raid0", 00:11:54.507 "superblock": true, 00:11:54.507 "num_base_bdevs": 2, 00:11:54.507 "num_base_bdevs_discovered": 2, 00:11:54.507 "num_base_bdevs_operational": 2, 00:11:54.507 "base_bdevs_list": [ 00:11:54.507 { 00:11:54.507 "name": "pt1", 00:11:54.507 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:54.507 "is_configured": true, 00:11:54.507 "data_offset": 2048, 00:11:54.507 "data_size": 63488 00:11:54.507 }, 00:11:54.507 { 00:11:54.507 "name": "pt2", 00:11:54.507 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:54.507 "is_configured": true, 00:11:54.507 "data_offset": 2048, 00:11:54.507 "data_size": 63488 00:11:54.507 } 00:11:54.507 ] 00:11:54.507 }' 00:11:54.507 06:38:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.507 06:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.766 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:54.766 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:54.766 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:54.766 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:54.766 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:54.766 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:54.766 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:54.766 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:54.766 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.766 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.766 [2024-12-06 06:38:13.315618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.766 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.766 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:54.766 "name": "raid_bdev1", 00:11:54.766 "aliases": [ 00:11:54.766 "f49ebf00-75bc-41e0-808a-bc070d1bd60d" 00:11:54.766 ], 00:11:54.766 "product_name": "Raid Volume", 00:11:54.766 "block_size": 512, 00:11:54.766 "num_blocks": 126976, 00:11:54.766 "uuid": "f49ebf00-75bc-41e0-808a-bc070d1bd60d", 00:11:54.766 "assigned_rate_limits": { 00:11:54.766 "rw_ios_per_sec": 0, 00:11:54.766 "rw_mbytes_per_sec": 0, 00:11:54.766 
"r_mbytes_per_sec": 0, 00:11:54.766 "w_mbytes_per_sec": 0 00:11:54.766 }, 00:11:54.766 "claimed": false, 00:11:54.766 "zoned": false, 00:11:54.766 "supported_io_types": { 00:11:54.766 "read": true, 00:11:54.766 "write": true, 00:11:54.766 "unmap": true, 00:11:54.766 "flush": true, 00:11:54.766 "reset": true, 00:11:54.766 "nvme_admin": false, 00:11:54.766 "nvme_io": false, 00:11:54.766 "nvme_io_md": false, 00:11:54.766 "write_zeroes": true, 00:11:54.766 "zcopy": false, 00:11:54.766 "get_zone_info": false, 00:11:54.766 "zone_management": false, 00:11:54.766 "zone_append": false, 00:11:54.766 "compare": false, 00:11:54.766 "compare_and_write": false, 00:11:54.766 "abort": false, 00:11:54.766 "seek_hole": false, 00:11:54.766 "seek_data": false, 00:11:54.766 "copy": false, 00:11:54.766 "nvme_iov_md": false 00:11:54.766 }, 00:11:54.766 "memory_domains": [ 00:11:54.766 { 00:11:54.766 "dma_device_id": "system", 00:11:54.766 "dma_device_type": 1 00:11:54.766 }, 00:11:54.766 { 00:11:54.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.766 "dma_device_type": 2 00:11:54.766 }, 00:11:54.766 { 00:11:54.766 "dma_device_id": "system", 00:11:54.766 "dma_device_type": 1 00:11:54.766 }, 00:11:54.766 { 00:11:54.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.766 "dma_device_type": 2 00:11:54.767 } 00:11:54.767 ], 00:11:54.767 "driver_specific": { 00:11:54.767 "raid": { 00:11:54.767 "uuid": "f49ebf00-75bc-41e0-808a-bc070d1bd60d", 00:11:54.767 "strip_size_kb": 64, 00:11:54.767 "state": "online", 00:11:54.767 "raid_level": "raid0", 00:11:54.767 "superblock": true, 00:11:54.767 "num_base_bdevs": 2, 00:11:54.767 "num_base_bdevs_discovered": 2, 00:11:54.767 "num_base_bdevs_operational": 2, 00:11:54.767 "base_bdevs_list": [ 00:11:54.767 { 00:11:54.767 "name": "pt1", 00:11:54.767 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:54.767 "is_configured": true, 00:11:54.767 "data_offset": 2048, 00:11:54.767 "data_size": 63488 00:11:54.767 }, 00:11:54.767 { 00:11:54.767 "name": 
"pt2", 00:11:54.767 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:54.767 "is_configured": true, 00:11:54.767 "data_offset": 2048, 00:11:54.767 "data_size": 63488 00:11:54.767 } 00:11:54.767 ] 00:11:54.767 } 00:11:54.767 } 00:11:54.767 }' 00:11:54.767 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:54.767 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:54.767 pt2' 00:11:54.767 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.025 [2024-12-06 06:38:13.543692] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f49ebf00-75bc-41e0-808a-bc070d1bd60d '!=' f49ebf00-75bc-41e0-808a-bc070d1bd60d ']' 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61344 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61344 ']' 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 61344 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61344 00:11:55.025 killing process with pid 61344 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61344' 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61344 00:11:55.025 [2024-12-06 06:38:13.608966] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:55.025 06:38:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61344 00:11:55.025 [2024-12-06 06:38:13.609090] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.025 [2024-12-06 06:38:13.609159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.025 [2024-12-06 06:38:13.609193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:55.283 [2024-12-06 06:38:13.795389] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:56.217 06:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:56.217 00:11:56.217 real 0m4.711s 00:11:56.217 user 0m6.872s 00:11:56.217 sys 0m0.695s 00:11:56.217 06:38:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.217 06:38:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:56.217 ************************************ 00:11:56.217 END TEST raid_superblock_test 00:11:56.217 ************************************ 00:11:56.476 06:38:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:11:56.476 06:38:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:56.476 06:38:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.476 06:38:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:56.476 ************************************ 00:11:56.476 START TEST raid_read_error_test 00:11:56.476 ************************************ 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:56.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WGBZa8XWQQ 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61550 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61550 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61550 ']' 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.476 06:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.476 [2024-12-06 06:38:15.003952] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:11:56.476 [2024-12-06 06:38:15.004344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61550 ] 00:11:56.734 [2024-12-06 06:38:15.185372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.734 [2024-12-06 06:38:15.342592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.993 [2024-12-06 06:38:15.557768] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.993 [2024-12-06 06:38:15.557850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.561 06:38:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.561 06:38:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:57.561 06:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.561 06:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:57.561 06:38:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.561 06:38:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.561 BaseBdev1_malloc 00:11:57.561 06:38:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.561 06:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:57.561 06:38:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.561 06:38:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.561 true 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.561 [2024-12-06 06:38:16.008944] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:57.561 [2024-12-06 06:38:16.009021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.561 [2024-12-06 06:38:16.009056] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:57.561 [2024-12-06 06:38:16.009075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.561 [2024-12-06 06:38:16.012047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.561 [2024-12-06 06:38:16.012119] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:57.561 BaseBdev1 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.561 BaseBdev2_malloc 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.561 true 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.561 [2024-12-06 06:38:16.072631] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:57.561 [2024-12-06 06:38:16.072866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.561 [2024-12-06 06:38:16.072909] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:57.561 [2024-12-06 06:38:16.072929] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.561 [2024-12-06 06:38:16.075952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.561 [2024-12-06 06:38:16.076135] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:57.561 BaseBdev2 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.561 [2024-12-06 06:38:16.084995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:11:57.561 [2024-12-06 06:38:16.087677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.561 [2024-12-06 06:38:16.087976] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:57.561 [2024-12-06 06:38:16.088004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:57.561 [2024-12-06 06:38:16.088363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:57.561 [2024-12-06 06:38:16.088624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:57.561 [2024-12-06 06:38:16.088658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:57.561 [2024-12-06 06:38:16.088942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.561 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.561 "name": "raid_bdev1", 00:11:57.562 "uuid": "190dba08-00b8-49b2-bcff-12333691194f", 00:11:57.562 "strip_size_kb": 64, 00:11:57.562 "state": "online", 00:11:57.562 "raid_level": "raid0", 00:11:57.562 "superblock": true, 00:11:57.562 "num_base_bdevs": 2, 00:11:57.562 "num_base_bdevs_discovered": 2, 00:11:57.562 "num_base_bdevs_operational": 2, 00:11:57.562 "base_bdevs_list": [ 00:11:57.562 { 00:11:57.562 "name": "BaseBdev1", 00:11:57.562 "uuid": "7e73f191-855e-5502-9bd0-1e6ff65ec04b", 00:11:57.562 "is_configured": true, 00:11:57.562 "data_offset": 2048, 00:11:57.562 "data_size": 63488 00:11:57.562 }, 00:11:57.562 { 00:11:57.562 "name": "BaseBdev2", 00:11:57.562 "uuid": "a09951ab-7d85-5831-9a10-a80e70391abf", 00:11:57.562 "is_configured": true, 00:11:57.562 "data_offset": 2048, 00:11:57.562 "data_size": 63488 00:11:57.562 } 00:11:57.562 ] 00:11:57.562 }' 00:11:57.562 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.562 06:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.128 06:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:58.128 06:38:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:58.128 [2024-12-06 06:38:16.678643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.061 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.061 "name": "raid_bdev1", 00:11:59.061 "uuid": "190dba08-00b8-49b2-bcff-12333691194f", 00:11:59.061 "strip_size_kb": 64, 00:11:59.061 "state": "online", 00:11:59.061 "raid_level": "raid0", 00:11:59.061 "superblock": true, 00:11:59.061 "num_base_bdevs": 2, 00:11:59.062 "num_base_bdevs_discovered": 2, 00:11:59.062 "num_base_bdevs_operational": 2, 00:11:59.062 "base_bdevs_list": [ 00:11:59.062 { 00:11:59.062 "name": "BaseBdev1", 00:11:59.062 "uuid": "7e73f191-855e-5502-9bd0-1e6ff65ec04b", 00:11:59.062 "is_configured": true, 00:11:59.062 "data_offset": 2048, 00:11:59.062 "data_size": 63488 00:11:59.062 }, 00:11:59.062 { 00:11:59.062 "name": "BaseBdev2", 00:11:59.062 "uuid": "a09951ab-7d85-5831-9a10-a80e70391abf", 00:11:59.062 "is_configured": true, 00:11:59.062 "data_offset": 2048, 00:11:59.062 "data_size": 63488 00:11:59.062 } 00:11:59.062 ] 00:11:59.062 }' 00:11:59.062 06:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.062 06:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.629 06:38:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:59.629 06:38:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.629 06:38:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.629 [2024-12-06 06:38:18.112853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:59.629 [2024-12-06 06:38:18.112897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.629 [2024-12-06 06:38:18.116376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.629 [2024-12-06 06:38:18.116437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.630 [2024-12-06 06:38:18.116482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.630 [2024-12-06 06:38:18.116500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:59.630 { 00:11:59.630 "results": [ 00:11:59.630 { 00:11:59.630 "job": "raid_bdev1", 00:11:59.630 "core_mask": "0x1", 00:11:59.630 "workload": "randrw", 00:11:59.630 "percentage": 50, 00:11:59.630 "status": "finished", 00:11:59.630 "queue_depth": 1, 00:11:59.630 "io_size": 131072, 00:11:59.630 "runtime": 1.431688, 00:11:59.630 "iops": 9751.426288409206, 00:11:59.630 "mibps": 1218.9282860511507, 00:11:59.630 "io_failed": 1, 00:11:59.630 "io_timeout": 0, 00:11:59.630 "avg_latency_us": 142.69883085257388, 00:11:59.630 "min_latency_us": 44.21818181818182, 00:11:59.630 "max_latency_us": 1846.9236363636364 00:11:59.630 } 00:11:59.630 ], 00:11:59.630 "core_count": 1 00:11:59.630 } 00:11:59.630 06:38:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.630 06:38:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61550 00:11:59.630 06:38:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61550 ']' 00:11:59.630 06:38:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61550 00:11:59.630 06:38:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:59.630 06:38:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.630 06:38:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61550 00:11:59.630 killing process with pid 61550 00:11:59.630 06:38:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.630 06:38:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.630 06:38:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61550' 00:11:59.630 06:38:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61550 00:11:59.630 [2024-12-06 06:38:18.155424] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:59.630 06:38:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61550 00:11:59.887 [2024-12-06 06:38:18.281503] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:00.825 06:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WGBZa8XWQQ 00:12:00.825 06:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:00.825 06:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:00.825 06:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:12:00.825 06:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:00.825 06:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:00.825 06:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:00.825 06:38:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:12:00.825 00:12:00.825 real 0m4.526s 00:12:00.825 user 0m5.592s 00:12:00.825 sys 0m0.545s 00:12:00.825 06:38:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.825 ************************************ 00:12:00.825 END TEST raid_read_error_test 00:12:00.825 ************************************ 00:12:00.825 06:38:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.084 06:38:19 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:12:01.084 06:38:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:01.084 06:38:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.084 06:38:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:01.084 ************************************ 00:12:01.084 START TEST raid_write_error_test 00:12:01.084 ************************************ 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:01.084 06:38:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oYIznJA9jl 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61697 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61697 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61697 ']' 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.084 06:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.084 [2024-12-06 06:38:19.590858] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:12:01.084 [2024-12-06 06:38:19.591017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61697 ] 00:12:01.343 [2024-12-06 06:38:19.766003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.343 [2024-12-06 06:38:19.902840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.601 [2024-12-06 06:38:20.109433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.601 [2024-12-06 06:38:20.109494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.168 BaseBdev1_malloc 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.168 true 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.168 [2024-12-06 06:38:20.648820] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:02.168 [2024-12-06 06:38:20.648895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.168 [2024-12-06 06:38:20.648928] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:02.168 [2024-12-06 06:38:20.648947] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.168 [2024-12-06 06:38:20.651952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.168 [2024-12-06 06:38:20.652008] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:02.168 BaseBdev1 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.168 BaseBdev2_malloc 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.168 true 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.168 [2024-12-06 06:38:20.713401] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:02.168 [2024-12-06 06:38:20.713488] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.168 [2024-12-06 06:38:20.713520] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 
00:12:02.168 [2024-12-06 06:38:20.713563] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.168 [2024-12-06 06:38:20.716475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.168 [2024-12-06 06:38:20.716555] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:02.168 BaseBdev2 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.168 [2024-12-06 06:38:20.725575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.168 [2024-12-06 06:38:20.728118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.168 [2024-12-06 06:38:20.728403] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:02.168 [2024-12-06 06:38:20.728431] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:02.168 [2024-12-06 06:38:20.728810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:02.168 [2024-12-06 06:38:20.729066] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:02.168 [2024-12-06 06:38:20.729090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:02.168 [2024-12-06 06:38:20.729329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.168 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.168 "name": "raid_bdev1", 00:12:02.168 "uuid": "fd6d15ca-aaba-4537-8708-c542b184df28", 00:12:02.168 "strip_size_kb": 64, 00:12:02.168 "state": "online", 00:12:02.168 "raid_level": "raid0", 00:12:02.168 "superblock": 
true, 00:12:02.168 "num_base_bdevs": 2, 00:12:02.168 "num_base_bdevs_discovered": 2, 00:12:02.168 "num_base_bdevs_operational": 2, 00:12:02.169 "base_bdevs_list": [ 00:12:02.169 { 00:12:02.169 "name": "BaseBdev1", 00:12:02.169 "uuid": "9ae2bd59-c91e-506d-8348-78acd3525998", 00:12:02.169 "is_configured": true, 00:12:02.169 "data_offset": 2048, 00:12:02.169 "data_size": 63488 00:12:02.169 }, 00:12:02.169 { 00:12:02.169 "name": "BaseBdev2", 00:12:02.169 "uuid": "0a6dfb7c-df06-5efd-bda7-cb8db3f40bbc", 00:12:02.169 "is_configured": true, 00:12:02.169 "data_offset": 2048, 00:12:02.169 "data_size": 63488 00:12:02.169 } 00:12:02.169 ] 00:12:02.169 }' 00:12:02.169 06:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.169 06:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.735 06:38:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:02.735 06:38:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:02.994 [2024-12-06 06:38:21.403095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:03.931 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:03.931 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.931 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.931 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.931 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:03.931 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:03.931 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:12:03.931 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:03.931 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.931 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.931 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.931 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.931 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:03.931 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.931 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.932 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.932 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.932 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.932 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.932 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.932 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.932 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.932 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.932 "name": "raid_bdev1", 00:12:03.932 "uuid": "fd6d15ca-aaba-4537-8708-c542b184df28", 00:12:03.932 "strip_size_kb": 64, 00:12:03.932 "state": "online", 00:12:03.932 "raid_level": "raid0", 
00:12:03.932 "superblock": true, 00:12:03.932 "num_base_bdevs": 2, 00:12:03.932 "num_base_bdevs_discovered": 2, 00:12:03.932 "num_base_bdevs_operational": 2, 00:12:03.932 "base_bdevs_list": [ 00:12:03.932 { 00:12:03.932 "name": "BaseBdev1", 00:12:03.932 "uuid": "9ae2bd59-c91e-506d-8348-78acd3525998", 00:12:03.932 "is_configured": true, 00:12:03.932 "data_offset": 2048, 00:12:03.932 "data_size": 63488 00:12:03.932 }, 00:12:03.932 { 00:12:03.932 "name": "BaseBdev2", 00:12:03.932 "uuid": "0a6dfb7c-df06-5efd-bda7-cb8db3f40bbc", 00:12:03.932 "is_configured": true, 00:12:03.932 "data_offset": 2048, 00:12:03.932 "data_size": 63488 00:12:03.932 } 00:12:03.932 ] 00:12:03.932 }' 00:12:03.932 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.932 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.499 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:04.499 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.499 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.499 [2024-12-06 06:38:22.849956] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:04.499 [2024-12-06 06:38:22.850142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:04.499 [2024-12-06 06:38:22.853741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:04.499 [2024-12-06 06:38:22.853940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.499 [2024-12-06 06:38:22.854111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:04.499 [2024-12-06 06:38:22.854263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:04.499 { 
00:12:04.499 "results": [ 00:12:04.499 { 00:12:04.499 "job": "raid_bdev1", 00:12:04.499 "core_mask": "0x1", 00:12:04.499 "workload": "randrw", 00:12:04.499 "percentage": 50, 00:12:04.499 "status": "finished", 00:12:04.499 "queue_depth": 1, 00:12:04.499 "io_size": 131072, 00:12:04.499 "runtime": 1.444416, 00:12:04.499 "iops": 9682.806061411671, 00:12:04.499 "mibps": 1210.350757676459, 00:12:04.499 "io_failed": 1, 00:12:04.499 "io_timeout": 0, 00:12:04.499 "avg_latency_us": 143.85183540560388, 00:12:04.499 "min_latency_us": 46.08, 00:12:04.499 "max_latency_us": 1861.8181818181818 00:12:04.499 } 00:12:04.499 ], 00:12:04.499 "core_count": 1 00:12:04.499 } 00:12:04.499 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.499 06:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61697 00:12:04.499 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61697 ']' 00:12:04.499 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61697 00:12:04.499 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:04.499 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.499 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61697 00:12:04.499 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.499 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.499 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61697' 00:12:04.499 killing process with pid 61697 00:12:04.499 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61697 00:12:04.499 [2024-12-06 06:38:22.893058] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:04.499 06:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61697 00:12:04.499 [2024-12-06 06:38:23.019464] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:05.876 06:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oYIznJA9jl 00:12:05.876 06:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:05.876 06:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:05.876 06:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:12:05.876 06:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:05.876 06:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:05.876 06:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:05.876 06:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:12:05.876 00:12:05.876 real 0m4.752s 00:12:05.876 user 0m6.014s 00:12:05.876 sys 0m0.536s 00:12:05.876 ************************************ 00:12:05.876 END TEST raid_write_error_test 00:12:05.876 ************************************ 00:12:05.876 06:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.876 06:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.876 06:38:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:05.876 06:38:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:12:05.876 06:38:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:05.876 06:38:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.876 06:38:24 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:12:05.876 ************************************ 00:12:05.876 START TEST raid_state_function_test 00:12:05.876 ************************************ 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:05.876 06:38:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:05.876 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:05.877 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:05.877 Process raid pid: 61839 00:12:05.877 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:05.877 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:05.877 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:05.877 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61839 00:12:05.877 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61839' 00:12:05.877 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:05.877 06:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61839 00:12:05.877 06:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61839 ']' 00:12:05.877 06:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.877 06:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.877 06:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:05.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.877 06:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.877 06:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.877 [2024-12-06 06:38:24.402312] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:12:05.877 [2024-12-06 06:38:24.402502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.136 [2024-12-06 06:38:24.591679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.136 [2024-12-06 06:38:24.727150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.416 [2024-12-06 06:38:24.939449] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:06.416 [2024-12-06 06:38:24.939492] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.009 06:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.009 06:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:07.009 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:07.009 06:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.009 06:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.009 [2024-12-06 06:38:25.444777] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:07.009 [2024-12-06 06:38:25.444853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:12:07.009 [2024-12-06 06:38:25.444871] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:07.009 [2024-12-06 06:38:25.444887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:07.009 06:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.009 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:07.009 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.010 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.010 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.010 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.010 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.010 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.010 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.010 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.010 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.010 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.010 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.010 06:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.010 06:38:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:07.010 06:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.010 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.010 "name": "Existed_Raid", 00:12:07.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.010 "strip_size_kb": 64, 00:12:07.010 "state": "configuring", 00:12:07.010 "raid_level": "concat", 00:12:07.010 "superblock": false, 00:12:07.010 "num_base_bdevs": 2, 00:12:07.010 "num_base_bdevs_discovered": 0, 00:12:07.010 "num_base_bdevs_operational": 2, 00:12:07.010 "base_bdevs_list": [ 00:12:07.010 { 00:12:07.010 "name": "BaseBdev1", 00:12:07.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.010 "is_configured": false, 00:12:07.010 "data_offset": 0, 00:12:07.010 "data_size": 0 00:12:07.010 }, 00:12:07.010 { 00:12:07.010 "name": "BaseBdev2", 00:12:07.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.010 "is_configured": false, 00:12:07.010 "data_offset": 0, 00:12:07.010 "data_size": 0 00:12:07.010 } 00:12:07.010 ] 00:12:07.010 }' 00:12:07.010 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.010 06:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.599 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:07.599 06:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.599 06:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.599 [2024-12-06 06:38:25.980862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:07.599 [2024-12-06 06:38:25.980906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:07.599 06:38:25 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.599 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:07.599 06:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.599 06:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.599 [2024-12-06 06:38:25.992872] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:07.599 [2024-12-06 06:38:25.993095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:07.599 [2024-12-06 06:38:25.993122] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:07.599 [2024-12-06 06:38:25.993144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:07.599 06:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.599 06:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:07.599 06:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.599 06:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.599 [2024-12-06 06:38:26.038819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.599 BaseBdev1 00:12:07.599 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.599 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:07.599 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:07.599 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
local bdev_timeout= 00:12:07.599 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:07.599 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.599 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.599 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.599 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.599 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.599 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.599 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:07.599 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.599 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.599 [ 00:12:07.599 { 00:12:07.599 "name": "BaseBdev1", 00:12:07.599 "aliases": [ 00:12:07.599 "76c5d38a-8c2b-4248-b331-2f866c7de63e" 00:12:07.599 ], 00:12:07.599 "product_name": "Malloc disk", 00:12:07.599 "block_size": 512, 00:12:07.599 "num_blocks": 65536, 00:12:07.599 "uuid": "76c5d38a-8c2b-4248-b331-2f866c7de63e", 00:12:07.599 "assigned_rate_limits": { 00:12:07.599 "rw_ios_per_sec": 0, 00:12:07.599 "rw_mbytes_per_sec": 0, 00:12:07.599 "r_mbytes_per_sec": 0, 00:12:07.599 "w_mbytes_per_sec": 0 00:12:07.599 }, 00:12:07.599 "claimed": true, 00:12:07.599 "claim_type": "exclusive_write", 00:12:07.599 "zoned": false, 00:12:07.599 "supported_io_types": { 00:12:07.599 "read": true, 00:12:07.599 "write": true, 00:12:07.599 "unmap": true, 00:12:07.599 "flush": true, 00:12:07.599 "reset": true, 00:12:07.599 "nvme_admin": false, 00:12:07.599 
"nvme_io": false, 00:12:07.599 "nvme_io_md": false, 00:12:07.599 "write_zeroes": true, 00:12:07.599 "zcopy": true, 00:12:07.600 "get_zone_info": false, 00:12:07.600 "zone_management": false, 00:12:07.600 "zone_append": false, 00:12:07.600 "compare": false, 00:12:07.600 "compare_and_write": false, 00:12:07.600 "abort": true, 00:12:07.600 "seek_hole": false, 00:12:07.600 "seek_data": false, 00:12:07.600 "copy": true, 00:12:07.600 "nvme_iov_md": false 00:12:07.600 }, 00:12:07.600 "memory_domains": [ 00:12:07.600 { 00:12:07.600 "dma_device_id": "system", 00:12:07.600 "dma_device_type": 1 00:12:07.600 }, 00:12:07.600 { 00:12:07.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.600 "dma_device_type": 2 00:12:07.600 } 00:12:07.600 ], 00:12:07.600 "driver_specific": {} 00:12:07.600 } 00:12:07.600 ] 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.600 06:38:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.600 "name": "Existed_Raid", 00:12:07.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.600 "strip_size_kb": 64, 00:12:07.600 "state": "configuring", 00:12:07.600 "raid_level": "concat", 00:12:07.600 "superblock": false, 00:12:07.600 "num_base_bdevs": 2, 00:12:07.600 "num_base_bdevs_discovered": 1, 00:12:07.600 "num_base_bdevs_operational": 2, 00:12:07.600 "base_bdevs_list": [ 00:12:07.600 { 00:12:07.600 "name": "BaseBdev1", 00:12:07.600 "uuid": "76c5d38a-8c2b-4248-b331-2f866c7de63e", 00:12:07.600 "is_configured": true, 00:12:07.600 "data_offset": 0, 00:12:07.600 "data_size": 65536 00:12:07.600 }, 00:12:07.600 { 00:12:07.600 "name": "BaseBdev2", 00:12:07.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.600 "is_configured": false, 00:12:07.600 "data_offset": 0, 00:12:07.600 "data_size": 0 00:12:07.600 } 00:12:07.600 ] 00:12:07.600 }' 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.600 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.168 06:38:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:08.168 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.168 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.168 [2024-12-06 06:38:26.599025] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:08.168 [2024-12-06 06:38:26.599099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:08.168 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.168 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:08.168 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.168 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.168 [2024-12-06 06:38:26.607054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.168 [2024-12-06 06:38:26.609517] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:08.168 [2024-12-06 06:38:26.609575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:08.168 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.168 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:08.168 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.169 "name": "Existed_Raid", 00:12:08.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.169 "strip_size_kb": 64, 00:12:08.169 "state": "configuring", 00:12:08.169 "raid_level": "concat", 00:12:08.169 "superblock": false, 00:12:08.169 "num_base_bdevs": 2, 00:12:08.169 "num_base_bdevs_discovered": 1, 00:12:08.169 "num_base_bdevs_operational": 2, 
00:12:08.169 "base_bdevs_list": [ 00:12:08.169 { 00:12:08.169 "name": "BaseBdev1", 00:12:08.169 "uuid": "76c5d38a-8c2b-4248-b331-2f866c7de63e", 00:12:08.169 "is_configured": true, 00:12:08.169 "data_offset": 0, 00:12:08.169 "data_size": 65536 00:12:08.169 }, 00:12:08.169 { 00:12:08.169 "name": "BaseBdev2", 00:12:08.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.169 "is_configured": false, 00:12:08.169 "data_offset": 0, 00:12:08.169 "data_size": 0 00:12:08.169 } 00:12:08.169 ] 00:12:08.169 }' 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.169 06:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.734 [2024-12-06 06:38:27.185742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.734 [2024-12-06 06:38:27.186011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:08.734 [2024-12-06 06:38:27.186064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:08.734 [2024-12-06 06:38:27.186540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:08.734 [2024-12-06 06:38:27.186894] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:08.734 [2024-12-06 06:38:27.187028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:08.734 [2024-12-06 06:38:27.187485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.734 BaseBdev2 00:12:08.734 
06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.734 [ 00:12:08.734 { 00:12:08.734 "name": "BaseBdev2", 00:12:08.734 "aliases": [ 00:12:08.734 "dfc56cc9-bd37-40f3-8ea2-015e6e9df3bf" 00:12:08.734 ], 00:12:08.734 "product_name": "Malloc disk", 00:12:08.734 "block_size": 512, 00:12:08.734 "num_blocks": 65536, 00:12:08.734 "uuid": "dfc56cc9-bd37-40f3-8ea2-015e6e9df3bf", 00:12:08.734 "assigned_rate_limits": { 00:12:08.734 "rw_ios_per_sec": 0, 00:12:08.734 "rw_mbytes_per_sec": 0, 
00:12:08.734 "r_mbytes_per_sec": 0, 00:12:08.734 "w_mbytes_per_sec": 0 00:12:08.734 }, 00:12:08.734 "claimed": true, 00:12:08.734 "claim_type": "exclusive_write", 00:12:08.734 "zoned": false, 00:12:08.734 "supported_io_types": { 00:12:08.734 "read": true, 00:12:08.734 "write": true, 00:12:08.734 "unmap": true, 00:12:08.734 "flush": true, 00:12:08.734 "reset": true, 00:12:08.734 "nvme_admin": false, 00:12:08.734 "nvme_io": false, 00:12:08.734 "nvme_io_md": false, 00:12:08.734 "write_zeroes": true, 00:12:08.734 "zcopy": true, 00:12:08.734 "get_zone_info": false, 00:12:08.734 "zone_management": false, 00:12:08.734 "zone_append": false, 00:12:08.734 "compare": false, 00:12:08.734 "compare_and_write": false, 00:12:08.734 "abort": true, 00:12:08.734 "seek_hole": false, 00:12:08.734 "seek_data": false, 00:12:08.734 "copy": true, 00:12:08.734 "nvme_iov_md": false 00:12:08.734 }, 00:12:08.734 "memory_domains": [ 00:12:08.734 { 00:12:08.734 "dma_device_id": "system", 00:12:08.734 "dma_device_type": 1 00:12:08.734 }, 00:12:08.734 { 00:12:08.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.734 "dma_device_type": 2 00:12:08.734 } 00:12:08.734 ], 00:12:08.734 "driver_specific": {} 00:12:08.734 } 00:12:08.734 ] 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:08.734 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.735 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.735 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:08.735 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.735 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.735 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.735 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.735 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.735 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.735 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.735 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.735 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.735 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.735 "name": "Existed_Raid", 00:12:08.735 "uuid": "5682011d-b819-48df-b8e6-ba018b8dde31", 00:12:08.735 "strip_size_kb": 64, 00:12:08.735 "state": "online", 00:12:08.735 "raid_level": "concat", 00:12:08.735 "superblock": false, 00:12:08.735 "num_base_bdevs": 2, 00:12:08.735 "num_base_bdevs_discovered": 2, 00:12:08.735 "num_base_bdevs_operational": 2, 00:12:08.735 "base_bdevs_list": [ 00:12:08.735 { 00:12:08.735 "name": "BaseBdev1", 00:12:08.735 "uuid": "76c5d38a-8c2b-4248-b331-2f866c7de63e", 00:12:08.735 
"is_configured": true, 00:12:08.735 "data_offset": 0, 00:12:08.735 "data_size": 65536 00:12:08.735 }, 00:12:08.735 { 00:12:08.735 "name": "BaseBdev2", 00:12:08.735 "uuid": "dfc56cc9-bd37-40f3-8ea2-015e6e9df3bf", 00:12:08.735 "is_configured": true, 00:12:08.735 "data_offset": 0, 00:12:08.735 "data_size": 65536 00:12:08.735 } 00:12:08.735 ] 00:12:08.735 }' 00:12:08.735 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.735 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.301 [2024-12-06 06:38:27.738280] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:12:09.301 "name": "Existed_Raid", 00:12:09.301 "aliases": [ 00:12:09.301 "5682011d-b819-48df-b8e6-ba018b8dde31" 00:12:09.301 ], 00:12:09.301 "product_name": "Raid Volume", 00:12:09.301 "block_size": 512, 00:12:09.301 "num_blocks": 131072, 00:12:09.301 "uuid": "5682011d-b819-48df-b8e6-ba018b8dde31", 00:12:09.301 "assigned_rate_limits": { 00:12:09.301 "rw_ios_per_sec": 0, 00:12:09.301 "rw_mbytes_per_sec": 0, 00:12:09.301 "r_mbytes_per_sec": 0, 00:12:09.301 "w_mbytes_per_sec": 0 00:12:09.301 }, 00:12:09.301 "claimed": false, 00:12:09.301 "zoned": false, 00:12:09.301 "supported_io_types": { 00:12:09.301 "read": true, 00:12:09.301 "write": true, 00:12:09.301 "unmap": true, 00:12:09.301 "flush": true, 00:12:09.301 "reset": true, 00:12:09.301 "nvme_admin": false, 00:12:09.301 "nvme_io": false, 00:12:09.301 "nvme_io_md": false, 00:12:09.301 "write_zeroes": true, 00:12:09.301 "zcopy": false, 00:12:09.301 "get_zone_info": false, 00:12:09.301 "zone_management": false, 00:12:09.301 "zone_append": false, 00:12:09.301 "compare": false, 00:12:09.301 "compare_and_write": false, 00:12:09.301 "abort": false, 00:12:09.301 "seek_hole": false, 00:12:09.301 "seek_data": false, 00:12:09.301 "copy": false, 00:12:09.301 "nvme_iov_md": false 00:12:09.301 }, 00:12:09.301 "memory_domains": [ 00:12:09.301 { 00:12:09.301 "dma_device_id": "system", 00:12:09.301 "dma_device_type": 1 00:12:09.301 }, 00:12:09.301 { 00:12:09.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.301 "dma_device_type": 2 00:12:09.301 }, 00:12:09.301 { 00:12:09.301 "dma_device_id": "system", 00:12:09.301 "dma_device_type": 1 00:12:09.301 }, 00:12:09.301 { 00:12:09.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.301 "dma_device_type": 2 00:12:09.301 } 00:12:09.301 ], 00:12:09.301 "driver_specific": { 00:12:09.301 "raid": { 00:12:09.301 "uuid": "5682011d-b819-48df-b8e6-ba018b8dde31", 00:12:09.301 "strip_size_kb": 64, 00:12:09.301 "state": "online", 00:12:09.301 "raid_level": "concat", 
00:12:09.301 "superblock": false, 00:12:09.301 "num_base_bdevs": 2, 00:12:09.301 "num_base_bdevs_discovered": 2, 00:12:09.301 "num_base_bdevs_operational": 2, 00:12:09.301 "base_bdevs_list": [ 00:12:09.301 { 00:12:09.301 "name": "BaseBdev1", 00:12:09.301 "uuid": "76c5d38a-8c2b-4248-b331-2f866c7de63e", 00:12:09.301 "is_configured": true, 00:12:09.301 "data_offset": 0, 00:12:09.301 "data_size": 65536 00:12:09.301 }, 00:12:09.301 { 00:12:09.301 "name": "BaseBdev2", 00:12:09.301 "uuid": "dfc56cc9-bd37-40f3-8ea2-015e6e9df3bf", 00:12:09.301 "is_configured": true, 00:12:09.301 "data_offset": 0, 00:12:09.301 "data_size": 65536 00:12:09.301 } 00:12:09.301 ] 00:12:09.301 } 00:12:09.301 } 00:12:09.301 }' 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:09.301 BaseBdev2' 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.301 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.560 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.560 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.560 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.560 06:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:09.560 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.560 06:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.560 [2024-12-06 06:38:27.978072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:09.560 [2024-12-06 06:38:27.978235] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:09.560 [2024-12-06 06:38:27.978328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.560 06:38:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.560 "name": "Existed_Raid", 00:12:09.560 "uuid": "5682011d-b819-48df-b8e6-ba018b8dde31", 00:12:09.560 "strip_size_kb": 64, 00:12:09.560 "state": "offline", 00:12:09.560 "raid_level": "concat", 00:12:09.560 "superblock": false, 00:12:09.560 "num_base_bdevs": 2, 00:12:09.560 "num_base_bdevs_discovered": 1, 00:12:09.560 "num_base_bdevs_operational": 1, 00:12:09.560 "base_bdevs_list": [ 00:12:09.560 { 00:12:09.560 "name": null, 00:12:09.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.560 "is_configured": false, 00:12:09.560 "data_offset": 0, 00:12:09.560 "data_size": 65536 00:12:09.560 }, 00:12:09.560 { 00:12:09.560 "name": "BaseBdev2", 00:12:09.560 "uuid": "dfc56cc9-bd37-40f3-8ea2-015e6e9df3bf", 00:12:09.560 "is_configured": true, 00:12:09.560 "data_offset": 0, 00:12:09.560 "data_size": 65536 00:12:09.560 } 00:12:09.560 ] 00:12:09.560 }' 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.560 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.126 06:38:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.126 [2024-12-06 06:38:28.648346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:10.126 [2024-12-06 06:38:28.648413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.126 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.384 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
00:12:10.384 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:10.384 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:10.384 06:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61839 00:12:10.384 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61839 ']' 00:12:10.384 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61839 00:12:10.384 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:10.384 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.384 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61839 00:12:10.384 killing process with pid 61839 00:12:10.384 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.384 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.384 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61839' 00:12:10.384 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61839 00:12:10.384 [2024-12-06 06:38:28.815054] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.384 06:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61839 00:12:10.384 [2024-12-06 06:38:28.829874] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:11.318 00:12:11.318 real 0m5.600s 00:12:11.318 user 0m8.465s 00:12:11.318 sys 0m0.802s 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:11.318 ************************************ 00:12:11.318 END TEST raid_state_function_test 00:12:11.318 ************************************ 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.318 06:38:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:12:11.318 06:38:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:11.318 06:38:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.318 06:38:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:11.318 ************************************ 00:12:11.318 START TEST raid_state_function_test_sb 00:12:11.318 ************************************ 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.318 Process raid pid: 62098 00:12:11.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62098 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62098' 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:11.318 06:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62098 00:12:11.319 06:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62098 ']' 00:12:11.319 06:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.319 06:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.319 06:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.319 06:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.319 06:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.577 [2024-12-06 06:38:30.070207] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
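The `(( i = 1 ))` / `(( i <= num_base_bdevs ))` / `echo BaseBdev$i` trace entries earlier in this test build the list of base bdev names passed to `bdev_raid_create`. The same construction, sketched in Python rather than copied from `bdev_raid.sh` (here `num_base_bdevs=2`, the second argument of this test run):

```python
# Rebuild the "BaseBdev1 BaseBdev2" name list the traced loop at
# bdev_raid.sh@209-211 appears to produce, for num_base_bdevs=2.
num_base_bdevs = 2
base_bdevs = [f"BaseBdev{i}" for i in range(1, num_base_bdevs + 1)]
print(" ".join(base_bdevs))  # BaseBdev1 BaseBdev2
```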
00:12:11.577 [2024-12-06 06:38:30.070640] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.835 [2024-12-06 06:38:30.254363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.835 [2024-12-06 06:38:30.384024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.093 [2024-12-06 06:38:30.591712] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.093 [2024-12-06 06:38:30.591950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.352 06:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.352 06:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:12.352 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:12.352 06:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.352 06:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.352 [2024-12-06 06:38:30.988444] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.352 [2024-12-06 06:38:30.988693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.352 [2024-12-06 06:38:30.988867] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.352 [2024-12-06 06:38:30.988934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.352 06:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
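The `waitforlisten 62098` call above blocks until the freshly launched `bdev_svc` app accepts connections on `/var/tmp/spdk.sock`. A minimal Python stand-in for that polling loop (an illustrative sketch only — the real helper lives in `autotest_common.sh` and additionally verifies that the target pid is still alive between retries):

```python
import os
import socket
import time

def wait_for_unix_socket(path, timeout=5.0, interval=0.1):
    """Poll until something is accepting connections on a UNIX socket.

    Illustrative stand-in for waitforlisten; returns True once a
    connect() succeeds, False if the timeout expires first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True
            except OSError:
                pass  # socket exists but nothing is listening yet
            finally:
                s.close()
        time.sleep(interval)
    return False
```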
00:12:12.352 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:12.352 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.352 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.352 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:12.352 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.352 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:12.352 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.352 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.352 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.352 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.610 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.610 06:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.610 06:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.610 06:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.610 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.610 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.610 "name": "Existed_Raid", 00:12:12.610 "uuid": "12b2f480-8160-4b59-89cb-b346ea7ed8eb", 00:12:12.610 
"strip_size_kb": 64, 00:12:12.610 "state": "configuring", 00:12:12.610 "raid_level": "concat", 00:12:12.610 "superblock": true, 00:12:12.610 "num_base_bdevs": 2, 00:12:12.610 "num_base_bdevs_discovered": 0, 00:12:12.610 "num_base_bdevs_operational": 2, 00:12:12.610 "base_bdevs_list": [ 00:12:12.610 { 00:12:12.610 "name": "BaseBdev1", 00:12:12.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.610 "is_configured": false, 00:12:12.610 "data_offset": 0, 00:12:12.610 "data_size": 0 00:12:12.610 }, 00:12:12.610 { 00:12:12.610 "name": "BaseBdev2", 00:12:12.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.610 "is_configured": false, 00:12:12.610 "data_offset": 0, 00:12:12.610 "data_size": 0 00:12:12.610 } 00:12:12.610 ] 00:12:12.610 }' 00:12:12.610 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.610 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.201 [2024-12-06 06:38:31.532485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.201 [2024-12-06 06:38:31.532528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.201 [2024-12-06 06:38:31.540489] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:13.201 [2024-12-06 06:38:31.540694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:13.201 [2024-12-06 06:38:31.540722] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.201 [2024-12-06 06:38:31.540745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.201 [2024-12-06 06:38:31.585144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.201 BaseBdev1 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:13.201 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.202 [ 00:12:13.202 { 00:12:13.202 "name": "BaseBdev1", 00:12:13.202 "aliases": [ 00:12:13.202 "b45eec04-ebd6-446c-a70d-c118bb1d5fa6" 00:12:13.202 ], 00:12:13.202 "product_name": "Malloc disk", 00:12:13.202 "block_size": 512, 00:12:13.202 "num_blocks": 65536, 00:12:13.202 "uuid": "b45eec04-ebd6-446c-a70d-c118bb1d5fa6", 00:12:13.202 "assigned_rate_limits": { 00:12:13.202 "rw_ios_per_sec": 0, 00:12:13.202 "rw_mbytes_per_sec": 0, 00:12:13.202 "r_mbytes_per_sec": 0, 00:12:13.202 "w_mbytes_per_sec": 0 00:12:13.202 }, 00:12:13.202 "claimed": true, 00:12:13.202 "claim_type": "exclusive_write", 00:12:13.202 "zoned": false, 00:12:13.202 "supported_io_types": { 00:12:13.202 "read": true, 00:12:13.202 "write": true, 00:12:13.202 "unmap": true, 00:12:13.202 "flush": true, 00:12:13.202 "reset": true, 00:12:13.202 "nvme_admin": false, 00:12:13.202 "nvme_io": false, 00:12:13.202 "nvme_io_md": false, 00:12:13.202 "write_zeroes": true, 00:12:13.202 "zcopy": true, 00:12:13.202 "get_zone_info": false, 00:12:13.202 "zone_management": false, 00:12:13.202 "zone_append": false, 00:12:13.202 "compare": false, 00:12:13.202 
"compare_and_write": false, 00:12:13.202 "abort": true, 00:12:13.202 "seek_hole": false, 00:12:13.202 "seek_data": false, 00:12:13.202 "copy": true, 00:12:13.202 "nvme_iov_md": false 00:12:13.202 }, 00:12:13.202 "memory_domains": [ 00:12:13.202 { 00:12:13.202 "dma_device_id": "system", 00:12:13.202 "dma_device_type": 1 00:12:13.202 }, 00:12:13.202 { 00:12:13.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.202 "dma_device_type": 2 00:12:13.202 } 00:12:13.202 ], 00:12:13.202 "driver_specific": {} 00:12:13.202 } 00:12:13.202 ] 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.202 06:38:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.202 "name": "Existed_Raid", 00:12:13.202 "uuid": "3550833c-97b1-4edc-a15c-d869722cd941", 00:12:13.202 "strip_size_kb": 64, 00:12:13.202 "state": "configuring", 00:12:13.202 "raid_level": "concat", 00:12:13.202 "superblock": true, 00:12:13.202 "num_base_bdevs": 2, 00:12:13.202 "num_base_bdevs_discovered": 1, 00:12:13.202 "num_base_bdevs_operational": 2, 00:12:13.202 "base_bdevs_list": [ 00:12:13.202 { 00:12:13.202 "name": "BaseBdev1", 00:12:13.202 "uuid": "b45eec04-ebd6-446c-a70d-c118bb1d5fa6", 00:12:13.202 "is_configured": true, 00:12:13.202 "data_offset": 2048, 00:12:13.202 "data_size": 63488 00:12:13.202 }, 00:12:13.202 { 00:12:13.202 "name": "BaseBdev2", 00:12:13.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.202 "is_configured": false, 00:12:13.202 "data_offset": 0, 00:12:13.202 "data_size": 0 00:12:13.202 } 00:12:13.202 ] 00:12:13.202 }' 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.202 06:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
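The numbers in the dump above are internally consistent: each base bdev is a 32 MiB malloc disk (`bdev_malloc_create 32 512`, i.e. 65536 blocks of 512 B), and because the raid was created with `-s`, superblock metadata occupies the front of the bdev — the log reports `data_offset` 2048 blocks for the configured BaseBdev1, leaving `data_size` 63488. A quick check of that arithmetic (the 1 MiB figure is inferred from the offset, not stated in the log):

```python
BLOCK_SIZE = 512
NUM_BLOCKS = 65536            # bdev_malloc_create 32 512 -> 32 MiB / 512 B

# With -s, raid metadata sits at the front of each base bdev; the log
# reports data_offset 2048 blocks for the configured BaseBdev1.
data_offset = 2048
data_size = NUM_BLOCKS - data_offset

assert data_size == 63488                        # matches "data_size": 63488
assert data_offset * BLOCK_SIZE == 1024 * 1024   # 1 MiB reserved per base bdev
# Once both members join, the concat volume spans both data regions,
# matching the "blockcnt 126976" reported when the raid goes online:
assert 2 * data_size == 126976
```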
00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.769 [2024-12-06 06:38:32.133375] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.769 [2024-12-06 06:38:32.133438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.769 [2024-12-06 06:38:32.141395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.769 [2024-12-06 06:38:32.143969] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.769 [2024-12-06 06:38:32.144030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.769 "name": "Existed_Raid", 00:12:13.769 "uuid": "6ff204d6-527a-4780-b068-314ccf6f97ee", 00:12:13.769 "strip_size_kb": 64, 00:12:13.769 "state": "configuring", 00:12:13.769 "raid_level": "concat", 00:12:13.769 "superblock": true, 00:12:13.769 "num_base_bdevs": 2, 00:12:13.769 "num_base_bdevs_discovered": 1, 00:12:13.769 "num_base_bdevs_operational": 2, 00:12:13.769 "base_bdevs_list": [ 00:12:13.769 { 00:12:13.769 "name": "BaseBdev1", 00:12:13.769 "uuid": 
"b45eec04-ebd6-446c-a70d-c118bb1d5fa6", 00:12:13.769 "is_configured": true, 00:12:13.769 "data_offset": 2048, 00:12:13.769 "data_size": 63488 00:12:13.769 }, 00:12:13.769 { 00:12:13.769 "name": "BaseBdev2", 00:12:13.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.769 "is_configured": false, 00:12:13.769 "data_offset": 0, 00:12:13.769 "data_size": 0 00:12:13.769 } 00:12:13.769 ] 00:12:13.769 }' 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.769 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.028 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:14.028 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.028 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.300 [2024-12-06 06:38:32.680742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:14.300 [2024-12-06 06:38:32.681068] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:14.300 [2024-12-06 06:38:32.681088] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:14.300 [2024-12-06 06:38:32.681430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:14.300 BaseBdev2 00:12:14.300 [2024-12-06 06:38:32.681659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:14.300 [2024-12-06 06:38:32.681682] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:14.300 [2024-12-06 06:38:32.681852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.300 [ 00:12:14.300 { 00:12:14.300 "name": "BaseBdev2", 00:12:14.300 "aliases": [ 00:12:14.300 "bf2302cf-2be5-47ae-8e35-defff3a3d913" 00:12:14.300 ], 00:12:14.300 "product_name": "Malloc disk", 00:12:14.300 "block_size": 512, 00:12:14.300 "num_blocks": 65536, 00:12:14.300 "uuid": "bf2302cf-2be5-47ae-8e35-defff3a3d913", 00:12:14.300 "assigned_rate_limits": { 00:12:14.300 "rw_ios_per_sec": 0, 00:12:14.300 "rw_mbytes_per_sec": 0, 00:12:14.300 "r_mbytes_per_sec": 0, 
00:12:14.300 "w_mbytes_per_sec": 0 00:12:14.300 }, 00:12:14.300 "claimed": true, 00:12:14.300 "claim_type": "exclusive_write", 00:12:14.300 "zoned": false, 00:12:14.300 "supported_io_types": { 00:12:14.300 "read": true, 00:12:14.300 "write": true, 00:12:14.300 "unmap": true, 00:12:14.300 "flush": true, 00:12:14.300 "reset": true, 00:12:14.300 "nvme_admin": false, 00:12:14.300 "nvme_io": false, 00:12:14.300 "nvme_io_md": false, 00:12:14.300 "write_zeroes": true, 00:12:14.300 "zcopy": true, 00:12:14.300 "get_zone_info": false, 00:12:14.300 "zone_management": false, 00:12:14.300 "zone_append": false, 00:12:14.300 "compare": false, 00:12:14.300 "compare_and_write": false, 00:12:14.300 "abort": true, 00:12:14.300 "seek_hole": false, 00:12:14.300 "seek_data": false, 00:12:14.300 "copy": true, 00:12:14.300 "nvme_iov_md": false 00:12:14.300 }, 00:12:14.300 "memory_domains": [ 00:12:14.300 { 00:12:14.300 "dma_device_id": "system", 00:12:14.300 "dma_device_type": 1 00:12:14.300 }, 00:12:14.300 { 00:12:14.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.300 "dma_device_type": 2 00:12:14.300 } 00:12:14.300 ], 00:12:14.300 "driver_specific": {} 00:12:14.300 } 00:12:14.300 ] 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:14.300 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.301 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:14.301 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.301 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.301 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.301 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.301 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.301 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.301 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.301 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.301 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.301 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.301 "name": "Existed_Raid", 00:12:14.301 "uuid": "6ff204d6-527a-4780-b068-314ccf6f97ee", 00:12:14.301 "strip_size_kb": 64, 00:12:14.301 "state": "online", 00:12:14.301 "raid_level": "concat", 00:12:14.301 "superblock": true, 00:12:14.301 "num_base_bdevs": 2, 00:12:14.301 "num_base_bdevs_discovered": 2, 00:12:14.301 "num_base_bdevs_operational": 2, 00:12:14.301 "base_bdevs_list": [ 00:12:14.301 { 00:12:14.301 "name": "BaseBdev1", 00:12:14.301 "uuid": 
"b45eec04-ebd6-446c-a70d-c118bb1d5fa6", 00:12:14.301 "is_configured": true, 00:12:14.301 "data_offset": 2048, 00:12:14.301 "data_size": 63488 00:12:14.301 }, 00:12:14.301 { 00:12:14.301 "name": "BaseBdev2", 00:12:14.301 "uuid": "bf2302cf-2be5-47ae-8e35-defff3a3d913", 00:12:14.301 "is_configured": true, 00:12:14.301 "data_offset": 2048, 00:12:14.301 "data_size": 63488 00:12:14.301 } 00:12:14.301 ] 00:12:14.301 }' 00:12:14.301 06:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.301 06:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.880 [2024-12-06 06:38:33.241308] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:14.880 "name": "Existed_Raid", 00:12:14.880 "aliases": [ 00:12:14.880 "6ff204d6-527a-4780-b068-314ccf6f97ee" 00:12:14.880 ], 00:12:14.880 "product_name": "Raid Volume", 00:12:14.880 "block_size": 512, 00:12:14.880 "num_blocks": 126976, 00:12:14.880 "uuid": "6ff204d6-527a-4780-b068-314ccf6f97ee", 00:12:14.880 "assigned_rate_limits": { 00:12:14.880 "rw_ios_per_sec": 0, 00:12:14.880 "rw_mbytes_per_sec": 0, 00:12:14.880 "r_mbytes_per_sec": 0, 00:12:14.880 "w_mbytes_per_sec": 0 00:12:14.880 }, 00:12:14.880 "claimed": false, 00:12:14.880 "zoned": false, 00:12:14.880 "supported_io_types": { 00:12:14.880 "read": true, 00:12:14.880 "write": true, 00:12:14.880 "unmap": true, 00:12:14.880 "flush": true, 00:12:14.880 "reset": true, 00:12:14.880 "nvme_admin": false, 00:12:14.880 "nvme_io": false, 00:12:14.880 "nvme_io_md": false, 00:12:14.880 "write_zeroes": true, 00:12:14.880 "zcopy": false, 00:12:14.880 "get_zone_info": false, 00:12:14.880 "zone_management": false, 00:12:14.880 "zone_append": false, 00:12:14.880 "compare": false, 00:12:14.880 "compare_and_write": false, 00:12:14.880 "abort": false, 00:12:14.880 "seek_hole": false, 00:12:14.880 "seek_data": false, 00:12:14.880 "copy": false, 00:12:14.880 "nvme_iov_md": false 00:12:14.880 }, 00:12:14.880 "memory_domains": [ 00:12:14.880 { 00:12:14.880 "dma_device_id": "system", 00:12:14.880 "dma_device_type": 1 00:12:14.880 }, 00:12:14.880 { 00:12:14.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.880 "dma_device_type": 2 00:12:14.880 }, 00:12:14.880 { 00:12:14.880 "dma_device_id": "system", 00:12:14.880 "dma_device_type": 1 00:12:14.880 }, 00:12:14.880 { 00:12:14.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.880 "dma_device_type": 2 00:12:14.880 } 00:12:14.880 ], 00:12:14.880 "driver_specific": { 00:12:14.880 "raid": { 00:12:14.880 "uuid": "6ff204d6-527a-4780-b068-314ccf6f97ee", 00:12:14.880 
"strip_size_kb": 64, 00:12:14.880 "state": "online", 00:12:14.880 "raid_level": "concat", 00:12:14.880 "superblock": true, 00:12:14.880 "num_base_bdevs": 2, 00:12:14.880 "num_base_bdevs_discovered": 2, 00:12:14.880 "num_base_bdevs_operational": 2, 00:12:14.880 "base_bdevs_list": [ 00:12:14.880 { 00:12:14.880 "name": "BaseBdev1", 00:12:14.880 "uuid": "b45eec04-ebd6-446c-a70d-c118bb1d5fa6", 00:12:14.880 "is_configured": true, 00:12:14.880 "data_offset": 2048, 00:12:14.880 "data_size": 63488 00:12:14.880 }, 00:12:14.880 { 00:12:14.880 "name": "BaseBdev2", 00:12:14.880 "uuid": "bf2302cf-2be5-47ae-8e35-defff3a3d913", 00:12:14.880 "is_configured": true, 00:12:14.880 "data_offset": 2048, 00:12:14.880 "data_size": 63488 00:12:14.880 } 00:12:14.880 ] 00:12:14.880 } 00:12:14.880 } 00:12:14.880 }' 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:14.880 BaseBdev2' 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.880 06:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.139 [2024-12-06 06:38:33.541119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:15.139 [2024-12-06 06:38:33.541163] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.139 [2024-12-06 06:38:33.541244] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.139 "name": "Existed_Raid", 00:12:15.139 "uuid": "6ff204d6-527a-4780-b068-314ccf6f97ee", 00:12:15.139 "strip_size_kb": 64, 00:12:15.139 "state": "offline", 00:12:15.139 "raid_level": "concat", 00:12:15.139 "superblock": true, 00:12:15.139 "num_base_bdevs": 2, 00:12:15.139 "num_base_bdevs_discovered": 1, 00:12:15.139 "num_base_bdevs_operational": 1, 00:12:15.139 "base_bdevs_list": [ 00:12:15.139 { 00:12:15.139 "name": null, 00:12:15.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.139 "is_configured": false, 00:12:15.139 "data_offset": 0, 00:12:15.139 "data_size": 63488 00:12:15.139 }, 00:12:15.139 { 00:12:15.139 "name": "BaseBdev2", 00:12:15.139 "uuid": "bf2302cf-2be5-47ae-8e35-defff3a3d913", 00:12:15.139 "is_configured": true, 00:12:15.139 "data_offset": 2048, 00:12:15.139 "data_size": 63488 00:12:15.139 } 00:12:15.139 ] 00:12:15.139 }' 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.139 06:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:15.707 
06:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.707 [2024-12-06 06:38:34.155920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:15.707 [2024-12-06 06:38:34.156002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:15.707 06:38:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62098 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62098 ']' 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62098 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62098 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.707 killing process with pid 62098 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62098' 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62098 00:12:15.707 [2024-12-06 06:38:34.336614] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:15.707 06:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62098 00:12:15.965 [2024-12-06 06:38:34.351374] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:16.901 06:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:16.901 00:12:16.901 real 0m5.526s 00:12:16.901 user 0m8.270s 00:12:16.901 sys 0m0.791s 00:12:16.901 06:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.901 ************************************ 00:12:16.901 END TEST raid_state_function_test_sb 00:12:16.901 ************************************ 00:12:16.901 06:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.901 06:38:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:12:16.901 06:38:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:16.901 06:38:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.901 06:38:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:16.901 ************************************ 00:12:16.901 START TEST raid_superblock_test 00:12:16.901 ************************************ 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:16.901 
06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62350 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62350 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62350 ']' 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.901 06:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.159 [2024-12-06 06:38:35.623363] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:12:17.159 [2024-12-06 06:38:35.623538] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62350 ] 00:12:17.159 [2024-12-06 06:38:35.799797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.417 [2024-12-06 06:38:35.931500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.676 [2024-12-06 06:38:36.137955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.676 [2024-12-06 06:38:36.138032] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.938 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.938 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:17.938 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:17.938 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:17.938 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:17.938 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:17.938 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:17.938 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:17.938 06:38:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:17.938 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:17.938 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:17.938 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.938 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.196 malloc1 00:12:18.196 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.196 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.197 [2024-12-06 06:38:36.627166] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:18.197 [2024-12-06 06:38:36.627255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.197 [2024-12-06 06:38:36.627295] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:18.197 [2024-12-06 06:38:36.627314] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.197 [2024-12-06 06:38:36.631105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.197 [2024-12-06 06:38:36.631165] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:18.197 pt1 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:18.197 06:38:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.197 malloc2 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.197 [2024-12-06 06:38:36.691406] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:18.197 [2024-12-06 06:38:36.691655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.197 [2024-12-06 06:38:36.691843] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:18.197 
[2024-12-06 06:38:36.691996] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.197 [2024-12-06 06:38:36.695445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.197 [2024-12-06 06:38:36.695650] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:18.197 pt2 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.197 [2024-12-06 06:38:36.700052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:18.197 [2024-12-06 06:38:36.703133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:18.197 [2024-12-06 06:38:36.703565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:18.197 [2024-12-06 06:38:36.703724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:18.197 [2024-12-06 06:38:36.704289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:18.197 [2024-12-06 06:38:36.704681] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:18.197 [2024-12-06 06:38:36.704838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:18.197 [2024-12-06 06:38:36.705331] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.197 "name": "raid_bdev1", 00:12:18.197 "uuid": 
"c7778609-f698-47e1-b595-2f0862e44c58", 00:12:18.197 "strip_size_kb": 64, 00:12:18.197 "state": "online", 00:12:18.197 "raid_level": "concat", 00:12:18.197 "superblock": true, 00:12:18.197 "num_base_bdevs": 2, 00:12:18.197 "num_base_bdevs_discovered": 2, 00:12:18.197 "num_base_bdevs_operational": 2, 00:12:18.197 "base_bdevs_list": [ 00:12:18.197 { 00:12:18.197 "name": "pt1", 00:12:18.197 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:18.197 "is_configured": true, 00:12:18.197 "data_offset": 2048, 00:12:18.197 "data_size": 63488 00:12:18.197 }, 00:12:18.197 { 00:12:18.197 "name": "pt2", 00:12:18.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.197 "is_configured": true, 00:12:18.197 "data_offset": 2048, 00:12:18.197 "data_size": 63488 00:12:18.197 } 00:12:18.197 ] 00:12:18.197 }' 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.197 06:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.764 
06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:18.764 [2024-12-06 06:38:37.201754] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:18.764 "name": "raid_bdev1", 00:12:18.764 "aliases": [ 00:12:18.764 "c7778609-f698-47e1-b595-2f0862e44c58" 00:12:18.764 ], 00:12:18.764 "product_name": "Raid Volume", 00:12:18.764 "block_size": 512, 00:12:18.764 "num_blocks": 126976, 00:12:18.764 "uuid": "c7778609-f698-47e1-b595-2f0862e44c58", 00:12:18.764 "assigned_rate_limits": { 00:12:18.764 "rw_ios_per_sec": 0, 00:12:18.764 "rw_mbytes_per_sec": 0, 00:12:18.764 "r_mbytes_per_sec": 0, 00:12:18.764 "w_mbytes_per_sec": 0 00:12:18.764 }, 00:12:18.764 "claimed": false, 00:12:18.764 "zoned": false, 00:12:18.764 "supported_io_types": { 00:12:18.764 "read": true, 00:12:18.764 "write": true, 00:12:18.764 "unmap": true, 00:12:18.764 "flush": true, 00:12:18.764 "reset": true, 00:12:18.764 "nvme_admin": false, 00:12:18.764 "nvme_io": false, 00:12:18.764 "nvme_io_md": false, 00:12:18.764 "write_zeroes": true, 00:12:18.764 "zcopy": false, 00:12:18.764 "get_zone_info": false, 00:12:18.764 "zone_management": false, 00:12:18.764 "zone_append": false, 00:12:18.764 "compare": false, 00:12:18.764 "compare_and_write": false, 00:12:18.764 "abort": false, 00:12:18.764 "seek_hole": false, 00:12:18.764 "seek_data": false, 00:12:18.764 "copy": false, 00:12:18.764 "nvme_iov_md": false 00:12:18.764 }, 00:12:18.764 "memory_domains": [ 00:12:18.764 { 00:12:18.764 "dma_device_id": "system", 00:12:18.764 "dma_device_type": 1 00:12:18.764 }, 00:12:18.764 { 00:12:18.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.764 "dma_device_type": 2 00:12:18.764 }, 00:12:18.764 { 00:12:18.764 "dma_device_id": "system", 00:12:18.764 
"dma_device_type": 1 00:12:18.764 }, 00:12:18.764 { 00:12:18.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.764 "dma_device_type": 2 00:12:18.764 } 00:12:18.764 ], 00:12:18.764 "driver_specific": { 00:12:18.764 "raid": { 00:12:18.764 "uuid": "c7778609-f698-47e1-b595-2f0862e44c58", 00:12:18.764 "strip_size_kb": 64, 00:12:18.764 "state": "online", 00:12:18.764 "raid_level": "concat", 00:12:18.764 "superblock": true, 00:12:18.764 "num_base_bdevs": 2, 00:12:18.764 "num_base_bdevs_discovered": 2, 00:12:18.764 "num_base_bdevs_operational": 2, 00:12:18.764 "base_bdevs_list": [ 00:12:18.764 { 00:12:18.764 "name": "pt1", 00:12:18.764 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:18.764 "is_configured": true, 00:12:18.764 "data_offset": 2048, 00:12:18.764 "data_size": 63488 00:12:18.764 }, 00:12:18.764 { 00:12:18.764 "name": "pt2", 00:12:18.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.764 "is_configured": true, 00:12:18.764 "data_offset": 2048, 00:12:18.764 "data_size": 63488 00:12:18.764 } 00:12:18.764 ] 00:12:18.764 } 00:12:18.764 } 00:12:18.764 }' 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:18.764 pt2' 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b pt1 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.764 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.024 [2024-12-06 06:38:37.465814] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c7778609-f698-47e1-b595-2f0862e44c58 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c7778609-f698-47e1-b595-2f0862e44c58 ']' 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.024 [2024-12-06 06:38:37.509424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:19.024 [2024-12-06 06:38:37.509604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:19.024 [2024-12-06 06:38:37.509752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.024 [2024-12-06 06:38:37.509819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:19.024 [2024-12-06 06:38:37.509840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.024 
06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.024 [2024-12-06 06:38:37.645632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:19.024 [2024-12-06 06:38:37.648742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:19.024 [2024-12-06 06:38:37.648870] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:19.024 [2024-12-06 06:38:37.648979] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:19.024 [2024-12-06 06:38:37.649020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:19.024 [2024-12-06 06:38:37.649047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:19.024 request: 00:12:19.024 { 00:12:19.024 "name": "raid_bdev1", 00:12:19.024 "raid_level": "concat", 00:12:19.024 "base_bdevs": [ 00:12:19.024 "malloc1", 00:12:19.024 "malloc2" 00:12:19.024 ], 00:12:19.024 "strip_size_kb": 64, 00:12:19.024 "superblock": false, 00:12:19.024 "method": "bdev_raid_create", 00:12:19.024 "req_id": 1 00:12:19.024 } 00:12:19.024 Got JSON-RPC error response 00:12:19.024 response: 00:12:19.024 { 00:12:19.024 "code": -17, 00:12:19.024 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:19.024 } 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.024 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.283 [2024-12-06 06:38:37.717736] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:19.283 [2024-12-06 06:38:37.717818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.283 [2024-12-06 06:38:37.717847] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:19.283 [2024-12-06 06:38:37.717865] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.283 [2024-12-06 06:38:37.720873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.283 [2024-12-06 06:38:37.720923] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:19.283 [2024-12-06 06:38:37.721049] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:19.283 [2024-12-06 06:38:37.721135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:19.283 pt1 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.283 "name": "raid_bdev1", 00:12:19.283 "uuid": "c7778609-f698-47e1-b595-2f0862e44c58", 00:12:19.283 "strip_size_kb": 64, 00:12:19.283 "state": "configuring", 00:12:19.283 "raid_level": "concat", 00:12:19.283 "superblock": true, 00:12:19.283 "num_base_bdevs": 2, 00:12:19.283 "num_base_bdevs_discovered": 1, 00:12:19.283 "num_base_bdevs_operational": 2, 00:12:19.283 "base_bdevs_list": [ 00:12:19.283 { 00:12:19.283 "name": "pt1", 00:12:19.283 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:19.283 "is_configured": true, 00:12:19.283 "data_offset": 2048, 00:12:19.283 "data_size": 63488 00:12:19.283 }, 00:12:19.283 { 00:12:19.283 "name": null, 00:12:19.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:19.283 "is_configured": false, 00:12:19.283 "data_offset": 2048, 00:12:19.283 "data_size": 63488 00:12:19.283 } 00:12:19.283 ] 00:12:19.283 }' 00:12:19.283 06:38:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.283 06:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.850 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:12:19.850 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:19.850 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:19.850 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:19.850 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.851 [2024-12-06 06:38:38.253888] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:19.851 [2024-12-06 06:38:38.254117] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.851 [2024-12-06 06:38:38.254157] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:19.851 [2024-12-06 06:38:38.254176] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.851 [2024-12-06 06:38:38.254765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.851 [2024-12-06 06:38:38.254807] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:19.851 [2024-12-06 06:38:38.254908] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:19.851 [2024-12-06 06:38:38.254954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:19.851 [2024-12-06 06:38:38.255095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:19.851 [2024-12-06 06:38:38.255114] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:19.851 [2024-12-06 06:38:38.255419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:19.851 [2024-12-06 06:38:38.255612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:19.851 [2024-12-06 06:38:38.255627] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:19.851 [2024-12-06 06:38:38.255800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.851 pt2 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.851 "name": "raid_bdev1", 00:12:19.851 "uuid": "c7778609-f698-47e1-b595-2f0862e44c58", 00:12:19.851 "strip_size_kb": 64, 00:12:19.851 "state": "online", 00:12:19.851 "raid_level": "concat", 00:12:19.851 "superblock": true, 00:12:19.851 "num_base_bdevs": 2, 00:12:19.851 "num_base_bdevs_discovered": 2, 00:12:19.851 "num_base_bdevs_operational": 2, 00:12:19.851 "base_bdevs_list": [ 00:12:19.851 { 00:12:19.851 "name": "pt1", 00:12:19.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:19.851 "is_configured": true, 00:12:19.851 "data_offset": 2048, 00:12:19.851 "data_size": 63488 00:12:19.851 }, 00:12:19.851 { 00:12:19.851 "name": "pt2", 00:12:19.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:19.851 "is_configured": true, 00:12:19.851 "data_offset": 2048, 00:12:19.851 "data_size": 63488 00:12:19.851 } 00:12:19.851 ] 00:12:19.851 }' 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.851 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:20.420 
06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.420 [2024-12-06 06:38:38.798348] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:20.420 "name": "raid_bdev1", 00:12:20.420 "aliases": [ 00:12:20.420 "c7778609-f698-47e1-b595-2f0862e44c58" 00:12:20.420 ], 00:12:20.420 "product_name": "Raid Volume", 00:12:20.420 "block_size": 512, 00:12:20.420 "num_blocks": 126976, 00:12:20.420 "uuid": "c7778609-f698-47e1-b595-2f0862e44c58", 00:12:20.420 "assigned_rate_limits": { 00:12:20.420 "rw_ios_per_sec": 0, 00:12:20.420 "rw_mbytes_per_sec": 0, 00:12:20.420 "r_mbytes_per_sec": 0, 00:12:20.420 "w_mbytes_per_sec": 0 00:12:20.420 }, 00:12:20.420 "claimed": false, 00:12:20.420 "zoned": false, 00:12:20.420 "supported_io_types": { 00:12:20.420 "read": true, 00:12:20.420 "write": true, 00:12:20.420 "unmap": true, 00:12:20.420 "flush": true, 00:12:20.420 "reset": true, 00:12:20.420 "nvme_admin": false, 00:12:20.420 "nvme_io": false, 00:12:20.420 "nvme_io_md": false, 00:12:20.420 
"write_zeroes": true, 00:12:20.420 "zcopy": false, 00:12:20.420 "get_zone_info": false, 00:12:20.420 "zone_management": false, 00:12:20.420 "zone_append": false, 00:12:20.420 "compare": false, 00:12:20.420 "compare_and_write": false, 00:12:20.420 "abort": false, 00:12:20.420 "seek_hole": false, 00:12:20.420 "seek_data": false, 00:12:20.420 "copy": false, 00:12:20.420 "nvme_iov_md": false 00:12:20.420 }, 00:12:20.420 "memory_domains": [ 00:12:20.420 { 00:12:20.420 "dma_device_id": "system", 00:12:20.420 "dma_device_type": 1 00:12:20.420 }, 00:12:20.420 { 00:12:20.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.420 "dma_device_type": 2 00:12:20.420 }, 00:12:20.420 { 00:12:20.420 "dma_device_id": "system", 00:12:20.420 "dma_device_type": 1 00:12:20.420 }, 00:12:20.420 { 00:12:20.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.420 "dma_device_type": 2 00:12:20.420 } 00:12:20.420 ], 00:12:20.420 "driver_specific": { 00:12:20.420 "raid": { 00:12:20.420 "uuid": "c7778609-f698-47e1-b595-2f0862e44c58", 00:12:20.420 "strip_size_kb": 64, 00:12:20.420 "state": "online", 00:12:20.420 "raid_level": "concat", 00:12:20.420 "superblock": true, 00:12:20.420 "num_base_bdevs": 2, 00:12:20.420 "num_base_bdevs_discovered": 2, 00:12:20.420 "num_base_bdevs_operational": 2, 00:12:20.420 "base_bdevs_list": [ 00:12:20.420 { 00:12:20.420 "name": "pt1", 00:12:20.420 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:20.420 "is_configured": true, 00:12:20.420 "data_offset": 2048, 00:12:20.420 "data_size": 63488 00:12:20.420 }, 00:12:20.420 { 00:12:20.420 "name": "pt2", 00:12:20.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.420 "is_configured": true, 00:12:20.420 "data_offset": 2048, 00:12:20.420 "data_size": 63488 00:12:20.420 } 00:12:20.420 ] 00:12:20.420 } 00:12:20.420 } 00:12:20.420 }' 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:20.420 pt2' 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.420 06:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.420 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.420 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.420 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.420 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:20.420 06:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.420 06:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.420 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.420 06:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.420 06:38:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.420 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:20.679 [2024-12-06 06:38:39.070386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c7778609-f698-47e1-b595-2f0862e44c58 '!=' c7778609-f698-47e1-b595-2f0862e44c58 ']' 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62350 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62350 ']' 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62350 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62350 00:12:20.679 killing process with pid 62350 
00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62350' 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62350 00:12:20.679 [2024-12-06 06:38:39.156343] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:20.679 06:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62350 00:12:20.679 [2024-12-06 06:38:39.156454] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.679 [2024-12-06 06:38:39.156536] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.679 [2024-12-06 06:38:39.156561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:20.938 [2024-12-06 06:38:39.343612] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:21.872 06:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:21.872 00:12:21.872 real 0m4.864s 00:12:21.872 user 0m7.109s 00:12:21.872 sys 0m0.735s 00:12:21.872 06:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.872 06:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.872 ************************************ 00:12:21.872 END TEST raid_superblock_test 00:12:21.872 ************************************ 00:12:21.872 06:38:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:12:21.872 06:38:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:21.872 06:38:40 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.872 06:38:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:21.872 ************************************ 00:12:21.872 START TEST raid_read_error_test 00:12:21.872 ************************************ 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:21.872 06:38:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.W07GuUyHZY 00:12:21.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62567 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62567 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62567 ']' 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.872 06:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.130 [2024-12-06 06:38:40.567796] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:12:22.130 [2024-12-06 06:38:40.567979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62567 ] 00:12:22.130 [2024-12-06 06:38:40.756848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.417 [2024-12-06 06:38:40.912169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.676 [2024-12-06 06:38:41.116096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.676 [2024-12-06 06:38:41.116178] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.243 BaseBdev1_malloc 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.243 true 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.243 [2024-12-06 06:38:41.651360] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:23.243 [2024-12-06 06:38:41.651584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.243 [2024-12-06 06:38:41.651626] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:23.243 [2024-12-06 06:38:41.651646] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.243 [2024-12-06 06:38:41.654960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.243 [2024-12-06 06:38:41.655177] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:23.243 BaseBdev1 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:23.243 BaseBdev2_malloc 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.243 true 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.243 [2024-12-06 06:38:41.712341] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:23.243 [2024-12-06 06:38:41.712428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.243 [2024-12-06 06:38:41.712457] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:23.243 [2024-12-06 06:38:41.712475] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.243 [2024-12-06 06:38:41.715470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.243 [2024-12-06 06:38:41.715556] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:23.243 BaseBdev2 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:23.243 
06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.243 [2024-12-06 06:38:41.724438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.243 [2024-12-06 06:38:41.726939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:23.243 [2024-12-06 06:38:41.727216] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:23.243 [2024-12-06 06:38:41.727240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:23.243 [2024-12-06 06:38:41.727591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:23.243 [2024-12-06 06:38:41.727854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:23.243 [2024-12-06 06:38:41.727877] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:23.243 [2024-12-06 06:38:41.728081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.243 06:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.243 "name": "raid_bdev1", 00:12:23.243 "uuid": "60927c81-711a-408a-a171-9e1eb29a74f4", 00:12:23.243 "strip_size_kb": 64, 00:12:23.243 "state": "online", 00:12:23.243 "raid_level": "concat", 00:12:23.243 "superblock": true, 00:12:23.243 "num_base_bdevs": 2, 00:12:23.243 "num_base_bdevs_discovered": 2, 00:12:23.243 "num_base_bdevs_operational": 2, 00:12:23.243 "base_bdevs_list": [ 00:12:23.243 { 00:12:23.243 "name": "BaseBdev1", 00:12:23.243 "uuid": "fb6bb6ca-b4be-5bcf-a11e-a91e45e098b1", 00:12:23.243 "is_configured": true, 00:12:23.243 "data_offset": 2048, 00:12:23.243 "data_size": 63488 00:12:23.243 }, 00:12:23.243 { 00:12:23.243 "name": "BaseBdev2", 00:12:23.244 "uuid": "8f290132-337a-5ab2-9633-d78be509bf57", 00:12:23.244 "is_configured": true, 00:12:23.244 "data_offset": 2048, 00:12:23.244 "data_size": 63488 00:12:23.244 } 00:12:23.244 ] 00:12:23.244 }' 00:12:23.244 06:38:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.244 06:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.810 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:23.810 06:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:23.810 [2024-12-06 06:38:42.354288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.746 "name": "raid_bdev1", 00:12:24.746 "uuid": "60927c81-711a-408a-a171-9e1eb29a74f4", 00:12:24.746 "strip_size_kb": 64, 00:12:24.746 "state": "online", 00:12:24.746 "raid_level": "concat", 00:12:24.746 "superblock": true, 00:12:24.746 "num_base_bdevs": 2, 00:12:24.746 "num_base_bdevs_discovered": 2, 00:12:24.746 "num_base_bdevs_operational": 2, 00:12:24.746 "base_bdevs_list": [ 00:12:24.746 { 00:12:24.746 "name": "BaseBdev1", 00:12:24.746 "uuid": "fb6bb6ca-b4be-5bcf-a11e-a91e45e098b1", 00:12:24.746 "is_configured": true, 00:12:24.746 "data_offset": 2048, 00:12:24.746 "data_size": 63488 00:12:24.746 }, 00:12:24.746 { 00:12:24.746 "name": "BaseBdev2", 00:12:24.746 "uuid": "8f290132-337a-5ab2-9633-d78be509bf57", 00:12:24.746 "is_configured": true, 00:12:24.746 "data_offset": 2048, 00:12:24.746 "data_size": 63488 00:12:24.746 } 00:12:24.746 ] 00:12:24.746 }' 00:12:24.746 06:38:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.746 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.314 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:25.314 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.314 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.314 [2024-12-06 06:38:43.754166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:25.314 [2024-12-06 06:38:43.754547] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.314 [2024-12-06 06:38:43.758214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.314 [2024-12-06 06:38:43.758516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.314 [2024-12-06 06:38:43.758749] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.314 [2024-12-06 06:38:43.758940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:25.314 { 00:12:25.314 "results": [ 00:12:25.314 { 00:12:25.314 "job": "raid_bdev1", 00:12:25.314 "core_mask": "0x1", 00:12:25.314 "workload": "randrw", 00:12:25.314 "percentage": 50, 00:12:25.314 "status": "finished", 00:12:25.314 "queue_depth": 1, 00:12:25.314 "io_size": 131072, 00:12:25.314 "runtime": 1.397561, 00:12:25.314 "iops": 10075.409946327924, 00:12:25.314 "mibps": 1259.4262432909904, 00:12:25.314 "io_failed": 1, 00:12:25.314 "io_timeout": 0, 00:12:25.314 "avg_latency_us": 138.31717911970148, 00:12:25.314 "min_latency_us": 39.33090909090909, 00:12:25.314 "max_latency_us": 1854.370909090909 00:12:25.314 } 00:12:25.314 ], 00:12:25.314 "core_count": 1 00:12:25.314 } 00:12:25.314 06:38:43
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.314 06:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62567 00:12:25.314 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62567 ']' 00:12:25.314 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62567 00:12:25.314 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:25.314 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:25.314 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62567 00:12:25.314 killing process with pid 62567 00:12:25.314 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:25.314 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:25.314 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62567' 00:12:25.314 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62567 00:12:25.314 [2024-12-06 06:38:43.803361] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:25.314 06:38:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62567 00:12:25.314 [2024-12-06 06:38:43.933611] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:26.699 06:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.W07GuUyHZY 00:12:26.699 06:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:26.699 06:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:26.699 06:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:12:26.699 06:38:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:26.699 06:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:26.699 06:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:26.699 ************************************ 00:12:26.699 END TEST raid_read_error_test 00:12:26.699 ************************************ 00:12:26.699 06:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:12:26.699 00:12:26.699 real 0m4.653s 00:12:26.699 user 0m5.859s 00:12:26.699 sys 0m0.552s 00:12:26.699 06:38:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.699 06:38:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.699 06:38:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:12:26.699 06:38:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:26.699 06:38:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.699 06:38:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:26.699 ************************************ 00:12:26.699 START TEST raid_write_error_test 00:12:26.699 ************************************ 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AxNnpGi28I 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62713 
00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62713 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62713 ']' 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.699 06:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.699 [2024-12-06 06:38:45.271202] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:12:26.699 [2024-12-06 06:38:45.271375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62713 ] 00:12:26.957 [2024-12-06 06:38:45.460116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.957 [2024-12-06 06:38:45.593573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.215 [2024-12-06 06:38:45.801697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.215 [2024-12-06 06:38:45.801959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.784 BaseBdev1_malloc 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.784 true 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.784 [2024-12-06 06:38:46.332928] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:27.784 [2024-12-06 06:38:46.332999] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.784 [2024-12-06 06:38:46.333031] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:27.784 [2024-12-06 06:38:46.333051] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.784 [2024-12-06 06:38:46.336079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.784 [2024-12-06 06:38:46.336130] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:27.784 BaseBdev1 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.784 BaseBdev2_malloc 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:27.784 06:38:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.784 true 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.784 [2024-12-06 06:38:46.398246] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:27.784 [2024-12-06 06:38:46.398317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.784 [2024-12-06 06:38:46.398343] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:27.784 [2024-12-06 06:38:46.398360] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.784 [2024-12-06 06:38:46.401272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.784 [2024-12-06 06:38:46.401470] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:27.784 BaseBdev2 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.784 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.784 [2024-12-06 06:38:46.406340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:12:27.784 [2024-12-06 06:38:46.408938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.784 [2024-12-06 06:38:46.409337] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:27.784 [2024-12-06 06:38:46.409481] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:27.784 [2024-12-06 06:38:46.409872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:27.784 [2024-12-06 06:38:46.410222] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:27.784 [2024-12-06 06:38:46.410345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:27.784 [2024-12-06 06:38:46.410830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.785 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.785 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:27.785 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.785 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.785 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:27.785 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.785 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:27.785 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.785 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.785 06:38:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.785 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.785 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.785 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.785 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.785 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.043 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.043 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.043 "name": "raid_bdev1", 00:12:28.043 "uuid": "b96d8677-1a15-40f0-9a45-ef51e452e8c0", 00:12:28.043 "strip_size_kb": 64, 00:12:28.043 "state": "online", 00:12:28.043 "raid_level": "concat", 00:12:28.043 "superblock": true, 00:12:28.043 "num_base_bdevs": 2, 00:12:28.043 "num_base_bdevs_discovered": 2, 00:12:28.043 "num_base_bdevs_operational": 2, 00:12:28.043 "base_bdevs_list": [ 00:12:28.043 { 00:12:28.043 "name": "BaseBdev1", 00:12:28.043 "uuid": "cddea2b4-4e88-53d8-841b-dcb473869327", 00:12:28.043 "is_configured": true, 00:12:28.043 "data_offset": 2048, 00:12:28.043 "data_size": 63488 00:12:28.043 }, 00:12:28.043 { 00:12:28.043 "name": "BaseBdev2", 00:12:28.044 "uuid": "7f2b823f-fd1d-5b97-81f5-8538c366e210", 00:12:28.044 "is_configured": true, 00:12:28.044 "data_offset": 2048, 00:12:28.044 "data_size": 63488 00:12:28.044 } 00:12:28.044 ] 00:12:28.044 }' 00:12:28.044 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.044 06:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.610 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:12:28.610 06:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:28.610 [2024-12-06 06:38:47.084500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.546 06:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.546 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.546 "name": "raid_bdev1", 00:12:29.546 "uuid": "b96d8677-1a15-40f0-9a45-ef51e452e8c0", 00:12:29.546 "strip_size_kb": 64, 00:12:29.546 "state": "online", 00:12:29.546 "raid_level": "concat", 00:12:29.546 "superblock": true, 00:12:29.546 "num_base_bdevs": 2, 00:12:29.546 "num_base_bdevs_discovered": 2, 00:12:29.546 "num_base_bdevs_operational": 2, 00:12:29.546 "base_bdevs_list": [ 00:12:29.546 { 00:12:29.546 "name": "BaseBdev1", 00:12:29.546 "uuid": "cddea2b4-4e88-53d8-841b-dcb473869327", 00:12:29.546 "is_configured": true, 00:12:29.546 "data_offset": 2048, 00:12:29.546 "data_size": 63488 00:12:29.546 }, 00:12:29.546 { 00:12:29.546 "name": "BaseBdev2", 00:12:29.546 "uuid": "7f2b823f-fd1d-5b97-81f5-8538c366e210", 00:12:29.546 "is_configured": true, 00:12:29.546 "data_offset": 2048, 00:12:29.546 "data_size": 63488 00:12:29.546 } 00:12:29.546 ] 00:12:29.546 }' 00:12:29.546 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.546 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.113 06:38:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:30.113 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.113 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.113 [2024-12-06 06:38:48.524396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.113 [2024-12-06 06:38:48.524606] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.113 { 00:12:30.113 "results": [ 00:12:30.113 { 00:12:30.113 "job": "raid_bdev1", 00:12:30.113 "core_mask": "0x1", 00:12:30.113 "workload": "randrw", 00:12:30.113 "percentage": 50, 00:12:30.113 "status": "finished", 00:12:30.113 "queue_depth": 1, 00:12:30.113 "io_size": 131072, 00:12:30.113 "runtime": 1.43753, 00:12:30.113 "iops": 10421.347728395234, 00:12:30.113 "mibps": 1302.6684660494043, 00:12:30.113 "io_failed": 1, 00:12:30.113 "io_timeout": 0, 00:12:30.113 "avg_latency_us": 133.461067705489, 00:12:30.113 "min_latency_us": 37.236363636363635, 00:12:30.113 "max_latency_us": 1966.08 00:12:30.113 } 00:12:30.113 ], 00:12:30.113 "core_count": 1 00:12:30.113 } 00:12:30.113 [2024-12-06 06:38:48.528259] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.113 [2024-12-06 06:38:48.528411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.113 [2024-12-06 06:38:48.528458] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.113 [2024-12-06 06:38:48.528480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:30.114 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.114 06:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62713 00:12:30.114 06:38:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 62713 ']' 00:12:30.114 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62713 00:12:30.114 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:30.114 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.114 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62713 00:12:30.114 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.114 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.114 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62713' 00:12:30.114 killing process with pid 62713 00:12:30.114 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62713 00:12:30.114 [2024-12-06 06:38:48.572619] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:30.114 06:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62713 00:12:30.114 [2024-12-06 06:38:48.697720] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:31.531 06:38:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AxNnpGi28I 00:12:31.531 06:38:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:31.531 06:38:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:31.531 ************************************ 00:12:31.531 END TEST raid_write_error_test 00:12:31.531 ************************************ 00:12:31.531 06:38:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:12:31.531 06:38:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:31.531 
06:38:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:31.532 06:38:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:31.532 06:38:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:12:31.532 00:12:31.532 real 0m4.676s 00:12:31.532 user 0m5.915s 00:12:31.532 sys 0m0.560s 00:12:31.532 06:38:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.532 06:38:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.532 06:38:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:31.532 06:38:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:12:31.532 06:38:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:31.532 06:38:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.532 06:38:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.532 ************************************ 00:12:31.532 START TEST raid_state_function_test 00:12:31.532 ************************************ 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
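The `fail_per_s` check that closed `raid_write_error_test` above (bdev_raid.sh lines 845-849 in the xtrace) filters the bdevperf summary with `grep`, takes whitespace-separated field 6 as the failures-per-second rate, and requires it to differ from `0.00`, proving the injected write error on `EE_BaseBdev1_malloc` was actually observed. A minimal standalone sketch of that extraction; the summary line here is a hypothetical stand-in assembled from the values reported earlier in this log, not real bdevperf output:

```shell
#!/usr/bin/env bash
# Sketch of the fail_per_s check: pull field 6 (failed IO/s) from a
# bdevperf per-job summary line and require it to be non-zero.
# The sample line is a stand-in built from numbers seen in this log.
summary='raid_bdev1 10421.35 1302.67 133.46 37.24 0.70'
fail_per_s=$(echo "$summary" | awk '{print $6}')
if [[ $fail_per_s != "0.00" ]]; then
  echo "injected write error detected: fail_per_s=$fail_per_s"
fi
```

In the real test the line comes from `grep raid_bdev1` over the bdevperf output file (`/raidtest/tmp.*`), so the field position depends on bdevperf's report format.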
00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:31.532 Process raid pid: 62856 00:12:31.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62856 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62856' 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62856 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62856 ']' 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.532 06:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.532 [2024-12-06 06:38:49.982674] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:12:31.532 [2024-12-06 06:38:49.983126] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.532 [2024-12-06 06:38:50.162055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.790 [2024-12-06 06:38:50.297147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.050 [2024-12-06 06:38:50.508062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.050 [2024-12-06 06:38:50.508319] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.309 06:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.309 06:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:32.309 06:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:32.309 06:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.310 06:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.568 [2024-12-06 06:38:50.955224] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:32.569 [2024-12-06 06:38:50.955294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:32.569 [2024-12-06 06:38:50.955312] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:32.569 [2024-12-06 06:38:50.955329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:32.569 06:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.569 06:38:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:32.569 06:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.569 06:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.569 06:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.569 06:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.569 06:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:32.569 06:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.569 06:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.569 06:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.569 06:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.569 06:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.569 06:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.569 06:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.569 06:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.569 06:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.569 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.569 "name": "Existed_Raid", 00:12:32.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.569 "strip_size_kb": 0, 00:12:32.569 "state": "configuring", 00:12:32.569 
"raid_level": "raid1", 00:12:32.569 "superblock": false, 00:12:32.569 "num_base_bdevs": 2, 00:12:32.569 "num_base_bdevs_discovered": 0, 00:12:32.569 "num_base_bdevs_operational": 2, 00:12:32.569 "base_bdevs_list": [ 00:12:32.569 { 00:12:32.569 "name": "BaseBdev1", 00:12:32.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.569 "is_configured": false, 00:12:32.569 "data_offset": 0, 00:12:32.569 "data_size": 0 00:12:32.569 }, 00:12:32.569 { 00:12:32.569 "name": "BaseBdev2", 00:12:32.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.569 "is_configured": false, 00:12:32.569 "data_offset": 0, 00:12:32.569 "data_size": 0 00:12:32.569 } 00:12:32.569 ] 00:12:32.569 }' 00:12:32.569 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.569 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.828 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:32.828 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.828 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.828 [2024-12-06 06:38:51.459308] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:32.828 [2024-12-06 06:38:51.459351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:32.828 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.828 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:32.828 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.828 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:32.828 [2024-12-06 06:38:51.467282] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:32.828 [2024-12-06 06:38:51.467336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:32.828 [2024-12-06 06:38:51.467352] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:32.828 [2024-12-06 06:38:51.467371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:32.828 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.828 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:32.828 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.828 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.088 [2024-12-06 06:38:51.512681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.088 BaseBdev1 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.088 [ 00:12:33.088 { 00:12:33.088 "name": "BaseBdev1", 00:12:33.088 "aliases": [ 00:12:33.088 "a4df8676-d51a-44b5-88c1-8a92d9a6366e" 00:12:33.088 ], 00:12:33.088 "product_name": "Malloc disk", 00:12:33.088 "block_size": 512, 00:12:33.088 "num_blocks": 65536, 00:12:33.088 "uuid": "a4df8676-d51a-44b5-88c1-8a92d9a6366e", 00:12:33.088 "assigned_rate_limits": { 00:12:33.088 "rw_ios_per_sec": 0, 00:12:33.088 "rw_mbytes_per_sec": 0, 00:12:33.088 "r_mbytes_per_sec": 0, 00:12:33.088 "w_mbytes_per_sec": 0 00:12:33.088 }, 00:12:33.088 "claimed": true, 00:12:33.088 "claim_type": "exclusive_write", 00:12:33.088 "zoned": false, 00:12:33.088 "supported_io_types": { 00:12:33.088 "read": true, 00:12:33.088 "write": true, 00:12:33.088 "unmap": true, 00:12:33.088 "flush": true, 00:12:33.088 "reset": true, 00:12:33.088 "nvme_admin": false, 00:12:33.088 "nvme_io": false, 00:12:33.088 "nvme_io_md": false, 00:12:33.088 "write_zeroes": true, 00:12:33.088 "zcopy": true, 00:12:33.088 "get_zone_info": false, 00:12:33.088 "zone_management": false, 00:12:33.088 "zone_append": false, 00:12:33.088 "compare": false, 00:12:33.088 "compare_and_write": false, 00:12:33.088 "abort": true, 00:12:33.088 "seek_hole": false, 00:12:33.088 "seek_data": false, 00:12:33.088 "copy": true, 00:12:33.088 "nvme_iov_md": 
false 00:12:33.088 }, 00:12:33.088 "memory_domains": [ 00:12:33.088 { 00:12:33.088 "dma_device_id": "system", 00:12:33.088 "dma_device_type": 1 00:12:33.088 }, 00:12:33.088 { 00:12:33.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.088 "dma_device_type": 2 00:12:33.088 } 00:12:33.088 ], 00:12:33.088 "driver_specific": {} 00:12:33.088 } 00:12:33.088 ] 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.088 
06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.088 "name": "Existed_Raid", 00:12:33.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.088 "strip_size_kb": 0, 00:12:33.088 "state": "configuring", 00:12:33.088 "raid_level": "raid1", 00:12:33.088 "superblock": false, 00:12:33.088 "num_base_bdevs": 2, 00:12:33.088 "num_base_bdevs_discovered": 1, 00:12:33.088 "num_base_bdevs_operational": 2, 00:12:33.088 "base_bdevs_list": [ 00:12:33.088 { 00:12:33.088 "name": "BaseBdev1", 00:12:33.088 "uuid": "a4df8676-d51a-44b5-88c1-8a92d9a6366e", 00:12:33.088 "is_configured": true, 00:12:33.088 "data_offset": 0, 00:12:33.088 "data_size": 65536 00:12:33.088 }, 00:12:33.088 { 00:12:33.088 "name": "BaseBdev2", 00:12:33.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.088 "is_configured": false, 00:12:33.088 "data_offset": 0, 00:12:33.088 "data_size": 0 00:12:33.088 } 00:12:33.088 ] 00:12:33.088 }' 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.088 06:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.657 [2024-12-06 06:38:52.088902] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:33.657 [2024-12-06 06:38:52.088964] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.657 [2024-12-06 06:38:52.096930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.657 [2024-12-06 06:38:52.099452] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:33.657 [2024-12-06 06:38:52.099506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.657 "name": "Existed_Raid", 00:12:33.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.657 "strip_size_kb": 0, 00:12:33.657 "state": "configuring", 00:12:33.657 "raid_level": "raid1", 00:12:33.657 "superblock": false, 00:12:33.657 "num_base_bdevs": 2, 00:12:33.657 "num_base_bdevs_discovered": 1, 00:12:33.657 "num_base_bdevs_operational": 2, 00:12:33.657 "base_bdevs_list": [ 00:12:33.657 { 00:12:33.657 "name": "BaseBdev1", 00:12:33.657 "uuid": "a4df8676-d51a-44b5-88c1-8a92d9a6366e", 00:12:33.657 "is_configured": true, 00:12:33.657 "data_offset": 0, 00:12:33.657 "data_size": 65536 00:12:33.657 }, 00:12:33.657 { 00:12:33.657 "name": "BaseBdev2", 00:12:33.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.657 "is_configured": false, 00:12:33.657 "data_offset": 0, 00:12:33.657 "data_size": 0 00:12:33.657 } 00:12:33.657 ] 
00:12:33.657 }' 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.657 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.226 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:34.226 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.226 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.226 [2024-12-06 06:38:52.655162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.226 [2024-12-06 06:38:52.655427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:34.226 [2024-12-06 06:38:52.655481] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:34.226 [2024-12-06 06:38:52.656027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:34.226 [2024-12-06 06:38:52.656378] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:34.226 [2024-12-06 06:38:52.656518] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:34.226 [2024-12-06 06:38:52.656864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.226 BaseBdev2 00:12:34.226 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.226 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:34.226 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:34.226 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:34.226 06:38:52 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:12:34.226 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.227 [ 00:12:34.227 { 00:12:34.227 "name": "BaseBdev2", 00:12:34.227 "aliases": [ 00:12:34.227 "0687fb8e-9082-4191-9893-3d9b99454b12" 00:12:34.227 ], 00:12:34.227 "product_name": "Malloc disk", 00:12:34.227 "block_size": 512, 00:12:34.227 "num_blocks": 65536, 00:12:34.227 "uuid": "0687fb8e-9082-4191-9893-3d9b99454b12", 00:12:34.227 "assigned_rate_limits": { 00:12:34.227 "rw_ios_per_sec": 0, 00:12:34.227 "rw_mbytes_per_sec": 0, 00:12:34.227 "r_mbytes_per_sec": 0, 00:12:34.227 "w_mbytes_per_sec": 0 00:12:34.227 }, 00:12:34.227 "claimed": true, 00:12:34.227 "claim_type": "exclusive_write", 00:12:34.227 "zoned": false, 00:12:34.227 "supported_io_types": { 00:12:34.227 "read": true, 00:12:34.227 "write": true, 00:12:34.227 "unmap": true, 00:12:34.227 "flush": true, 00:12:34.227 "reset": true, 00:12:34.227 "nvme_admin": false, 00:12:34.227 "nvme_io": false, 00:12:34.227 "nvme_io_md": false, 00:12:34.227 "write_zeroes": 
true, 00:12:34.227 "zcopy": true, 00:12:34.227 "get_zone_info": false, 00:12:34.227 "zone_management": false, 00:12:34.227 "zone_append": false, 00:12:34.227 "compare": false, 00:12:34.227 "compare_and_write": false, 00:12:34.227 "abort": true, 00:12:34.227 "seek_hole": false, 00:12:34.227 "seek_data": false, 00:12:34.227 "copy": true, 00:12:34.227 "nvme_iov_md": false 00:12:34.227 }, 00:12:34.227 "memory_domains": [ 00:12:34.227 { 00:12:34.227 "dma_device_id": "system", 00:12:34.227 "dma_device_type": 1 00:12:34.227 }, 00:12:34.227 { 00:12:34.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.227 "dma_device_type": 2 00:12:34.227 } 00:12:34.227 ], 00:12:34.227 "driver_specific": {} 00:12:34.227 } 00:12:34.227 ] 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.227 06:38:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.227 "name": "Existed_Raid", 00:12:34.227 "uuid": "689d05ef-2f02-42a7-8ef5-12768e3377ca", 00:12:34.227 "strip_size_kb": 0, 00:12:34.227 "state": "online", 00:12:34.227 "raid_level": "raid1", 00:12:34.227 "superblock": false, 00:12:34.227 "num_base_bdevs": 2, 00:12:34.227 "num_base_bdevs_discovered": 2, 00:12:34.227 "num_base_bdevs_operational": 2, 00:12:34.227 "base_bdevs_list": [ 00:12:34.227 { 00:12:34.227 "name": "BaseBdev1", 00:12:34.227 "uuid": "a4df8676-d51a-44b5-88c1-8a92d9a6366e", 00:12:34.227 "is_configured": true, 00:12:34.227 "data_offset": 0, 00:12:34.227 "data_size": 65536 00:12:34.227 }, 00:12:34.227 { 00:12:34.227 "name": "BaseBdev2", 00:12:34.227 "uuid": "0687fb8e-9082-4191-9893-3d9b99454b12", 00:12:34.227 "is_configured": true, 00:12:34.227 "data_offset": 0, 00:12:34.227 "data_size": 65536 00:12:34.227 } 00:12:34.227 ] 00:12:34.227 }' 00:12:34.227 06:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.227 06:38:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.795 [2024-12-06 06:38:53.235738] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:34.795 "name": "Existed_Raid", 00:12:34.795 "aliases": [ 00:12:34.795 "689d05ef-2f02-42a7-8ef5-12768e3377ca" 00:12:34.795 ], 00:12:34.795 "product_name": "Raid Volume", 00:12:34.795 "block_size": 512, 00:12:34.795 "num_blocks": 65536, 00:12:34.795 "uuid": "689d05ef-2f02-42a7-8ef5-12768e3377ca", 00:12:34.795 "assigned_rate_limits": { 00:12:34.795 "rw_ios_per_sec": 0, 00:12:34.795 "rw_mbytes_per_sec": 0, 00:12:34.795 "r_mbytes_per_sec": 0, 00:12:34.795 
"w_mbytes_per_sec": 0 00:12:34.795 }, 00:12:34.795 "claimed": false, 00:12:34.795 "zoned": false, 00:12:34.795 "supported_io_types": { 00:12:34.795 "read": true, 00:12:34.795 "write": true, 00:12:34.795 "unmap": false, 00:12:34.795 "flush": false, 00:12:34.795 "reset": true, 00:12:34.795 "nvme_admin": false, 00:12:34.795 "nvme_io": false, 00:12:34.795 "nvme_io_md": false, 00:12:34.795 "write_zeroes": true, 00:12:34.795 "zcopy": false, 00:12:34.795 "get_zone_info": false, 00:12:34.795 "zone_management": false, 00:12:34.795 "zone_append": false, 00:12:34.795 "compare": false, 00:12:34.795 "compare_and_write": false, 00:12:34.795 "abort": false, 00:12:34.795 "seek_hole": false, 00:12:34.795 "seek_data": false, 00:12:34.795 "copy": false, 00:12:34.795 "nvme_iov_md": false 00:12:34.795 }, 00:12:34.795 "memory_domains": [ 00:12:34.795 { 00:12:34.795 "dma_device_id": "system", 00:12:34.795 "dma_device_type": 1 00:12:34.795 }, 00:12:34.795 { 00:12:34.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.795 "dma_device_type": 2 00:12:34.795 }, 00:12:34.795 { 00:12:34.795 "dma_device_id": "system", 00:12:34.795 "dma_device_type": 1 00:12:34.795 }, 00:12:34.795 { 00:12:34.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.795 "dma_device_type": 2 00:12:34.795 } 00:12:34.795 ], 00:12:34.795 "driver_specific": { 00:12:34.795 "raid": { 00:12:34.795 "uuid": "689d05ef-2f02-42a7-8ef5-12768e3377ca", 00:12:34.795 "strip_size_kb": 0, 00:12:34.795 "state": "online", 00:12:34.795 "raid_level": "raid1", 00:12:34.795 "superblock": false, 00:12:34.795 "num_base_bdevs": 2, 00:12:34.795 "num_base_bdevs_discovered": 2, 00:12:34.795 "num_base_bdevs_operational": 2, 00:12:34.795 "base_bdevs_list": [ 00:12:34.795 { 00:12:34.795 "name": "BaseBdev1", 00:12:34.795 "uuid": "a4df8676-d51a-44b5-88c1-8a92d9a6366e", 00:12:34.795 "is_configured": true, 00:12:34.795 "data_offset": 0, 00:12:34.795 "data_size": 65536 00:12:34.795 }, 00:12:34.795 { 00:12:34.795 "name": "BaseBdev2", 00:12:34.795 "uuid": 
"0687fb8e-9082-4191-9893-3d9b99454b12", 00:12:34.795 "is_configured": true, 00:12:34.795 "data_offset": 0, 00:12:34.795 "data_size": 65536 00:12:34.795 } 00:12:34.795 ] 00:12:34.795 } 00:12:34.795 } 00:12:34.795 }' 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:34.795 BaseBdev2' 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:34.795 06:38:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.795 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.055 [2024-12-06 06:38:53.487490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.055 "name": "Existed_Raid", 00:12:35.055 "uuid": "689d05ef-2f02-42a7-8ef5-12768e3377ca", 00:12:35.055 "strip_size_kb": 0, 00:12:35.055 "state": "online", 00:12:35.055 "raid_level": "raid1", 00:12:35.055 "superblock": false, 00:12:35.055 "num_base_bdevs": 2, 00:12:35.055 "num_base_bdevs_discovered": 1, 00:12:35.055 "num_base_bdevs_operational": 1, 00:12:35.055 "base_bdevs_list": [ 00:12:35.055 { 
00:12:35.055 "name": null, 00:12:35.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.055 "is_configured": false, 00:12:35.055 "data_offset": 0, 00:12:35.055 "data_size": 65536 00:12:35.055 }, 00:12:35.055 { 00:12:35.055 "name": "BaseBdev2", 00:12:35.055 "uuid": "0687fb8e-9082-4191-9893-3d9b99454b12", 00:12:35.055 "is_configured": true, 00:12:35.055 "data_offset": 0, 00:12:35.055 "data_size": 65536 00:12:35.055 } 00:12:35.055 ] 00:12:35.055 }' 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.055 06:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.692 06:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:35.692 06:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.692 06:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:35.692 06:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.692 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.692 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.692 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.692 06:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:35.692 06:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.692 06:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:35.692 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.692 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:35.692 [2024-12-06 06:38:54.137203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:35.692 [2024-12-06 06:38:54.137323] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.693 [2024-12-06 06:38:54.223346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.693 [2024-12-06 06:38:54.223627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.693 [2024-12-06 06:38:54.223663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62856 00:12:35.693 06:38:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62856 ']' 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62856 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62856 00:12:35.693 killing process with pid 62856 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62856' 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62856 00:12:35.693 [2024-12-06 06:38:54.316158] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:35.693 06:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62856 00:12:35.693 [2024-12-06 06:38:54.330819] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:37.069 00:12:37.069 real 0m5.507s 00:12:37.069 user 0m8.350s 00:12:37.069 sys 0m0.751s 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.069 ************************************ 00:12:37.069 END TEST raid_state_function_test 00:12:37.069 ************************************ 00:12:37.069 06:38:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:12:37.069 06:38:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:37.069 06:38:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.069 06:38:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:37.069 ************************************ 00:12:37.069 START TEST raid_state_function_test_sb 00:12:37.069 ************************************ 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:37.069 Process raid pid: 63115 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63115 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63115' 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63115 00:12:37.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63115 ']' 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.069 06:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.069 [2024-12-06 06:38:55.546717] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:12:37.069 [2024-12-06 06:38:55.547130] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.395 [2024-12-06 06:38:55.733017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.395 [2024-12-06 06:38:55.864488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.658 [2024-12-06 06:38:56.072165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.658 [2024-12-06 06:38:56.072200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.226 [2024-12-06 06:38:56.588918] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:38.226 [2024-12-06 06:38:56.588987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:38.226 [2024-12-06 06:38:56.589004] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:38.226 [2024-12-06 06:38:56.589020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.226 
06:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.226 "name": "Existed_Raid", 00:12:38.226 "uuid": "080b8fc5-b25c-4099-91aa-e1485b939f1e", 00:12:38.226 "strip_size_kb": 0, 
00:12:38.226 "state": "configuring", 00:12:38.226 "raid_level": "raid1", 00:12:38.226 "superblock": true, 00:12:38.226 "num_base_bdevs": 2, 00:12:38.226 "num_base_bdevs_discovered": 0, 00:12:38.226 "num_base_bdevs_operational": 2, 00:12:38.226 "base_bdevs_list": [ 00:12:38.226 { 00:12:38.226 "name": "BaseBdev1", 00:12:38.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.226 "is_configured": false, 00:12:38.226 "data_offset": 0, 00:12:38.226 "data_size": 0 00:12:38.226 }, 00:12:38.226 { 00:12:38.226 "name": "BaseBdev2", 00:12:38.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.226 "is_configured": false, 00:12:38.226 "data_offset": 0, 00:12:38.226 "data_size": 0 00:12:38.226 } 00:12:38.226 ] 00:12:38.226 }' 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.226 06:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.792 [2024-12-06 06:38:57.157032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:38.792 [2024-12-06 06:38:57.157229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.792 06:38:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.792 [2024-12-06 06:38:57.165004] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:38.792 [2024-12-06 06:38:57.165058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:38.792 [2024-12-06 06:38:57.165074] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:38.792 [2024-12-06 06:38:57.165093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.792 [2024-12-06 06:38:57.210110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.792 BaseBdev1 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.792 [ 00:12:38.792 { 00:12:38.792 "name": "BaseBdev1", 00:12:38.792 "aliases": [ 00:12:38.792 "e0300db1-06a0-4488-a915-5f1443cd3057" 00:12:38.792 ], 00:12:38.792 "product_name": "Malloc disk", 00:12:38.792 "block_size": 512, 00:12:38.792 "num_blocks": 65536, 00:12:38.792 "uuid": "e0300db1-06a0-4488-a915-5f1443cd3057", 00:12:38.792 "assigned_rate_limits": { 00:12:38.792 "rw_ios_per_sec": 0, 00:12:38.792 "rw_mbytes_per_sec": 0, 00:12:38.792 "r_mbytes_per_sec": 0, 00:12:38.792 "w_mbytes_per_sec": 0 00:12:38.792 }, 00:12:38.792 "claimed": true, 00:12:38.792 "claim_type": "exclusive_write", 00:12:38.792 "zoned": false, 00:12:38.792 "supported_io_types": { 00:12:38.792 "read": true, 00:12:38.792 "write": true, 00:12:38.792 "unmap": true, 00:12:38.792 "flush": true, 00:12:38.792 "reset": true, 00:12:38.792 "nvme_admin": false, 00:12:38.792 "nvme_io": false, 00:12:38.792 "nvme_io_md": false, 00:12:38.792 "write_zeroes": true, 00:12:38.792 "zcopy": true, 00:12:38.792 "get_zone_info": false, 00:12:38.792 "zone_management": false, 00:12:38.792 "zone_append": false, 00:12:38.792 "compare": false, 00:12:38.792 "compare_and_write": false, 00:12:38.792 
"abort": true, 00:12:38.792 "seek_hole": false, 00:12:38.792 "seek_data": false, 00:12:38.792 "copy": true, 00:12:38.792 "nvme_iov_md": false 00:12:38.792 }, 00:12:38.792 "memory_domains": [ 00:12:38.792 { 00:12:38.792 "dma_device_id": "system", 00:12:38.792 "dma_device_type": 1 00:12:38.792 }, 00:12:38.792 { 00:12:38.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.792 "dma_device_type": 2 00:12:38.792 } 00:12:38.792 ], 00:12:38.792 "driver_specific": {} 00:12:38.792 } 00:12:38.792 ] 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.792 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.792 "name": "Existed_Raid", 00:12:38.792 "uuid": "31b15ded-f365-4b14-b114-16e2b14e4ca5", 00:12:38.792 "strip_size_kb": 0, 00:12:38.792 "state": "configuring", 00:12:38.792 "raid_level": "raid1", 00:12:38.792 "superblock": true, 00:12:38.792 "num_base_bdevs": 2, 00:12:38.792 "num_base_bdevs_discovered": 1, 00:12:38.792 "num_base_bdevs_operational": 2, 00:12:38.792 "base_bdevs_list": [ 00:12:38.792 { 00:12:38.792 "name": "BaseBdev1", 00:12:38.793 "uuid": "e0300db1-06a0-4488-a915-5f1443cd3057", 00:12:38.793 "is_configured": true, 00:12:38.793 "data_offset": 2048, 00:12:38.793 "data_size": 63488 00:12:38.793 }, 00:12:38.793 { 00:12:38.793 "name": "BaseBdev2", 00:12:38.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.793 "is_configured": false, 00:12:38.793 "data_offset": 0, 00:12:38.793 "data_size": 0 00:12:38.793 } 00:12:38.793 ] 00:12:38.793 }' 00:12:38.793 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.793 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:39.359 [2024-12-06 06:38:57.782331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:39.359 [2024-12-06 06:38:57.782396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.359 [2024-12-06 06:38:57.790641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:39.359 [2024-12-06 06:38:57.796280] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:39.359 [2024-12-06 06:38:57.796397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.359 "name": "Existed_Raid", 00:12:39.359 "uuid": "37edb2d2-6e27-4b18-8fe8-580b22367992", 00:12:39.359 "strip_size_kb": 0, 00:12:39.359 "state": "configuring", 00:12:39.359 "raid_level": "raid1", 00:12:39.359 "superblock": true, 00:12:39.359 "num_base_bdevs": 2, 00:12:39.359 "num_base_bdevs_discovered": 1, 00:12:39.359 "num_base_bdevs_operational": 2, 00:12:39.359 "base_bdevs_list": [ 00:12:39.359 { 00:12:39.359 "name": "BaseBdev1", 00:12:39.359 "uuid": "e0300db1-06a0-4488-a915-5f1443cd3057", 00:12:39.359 "is_configured": true, 00:12:39.359 "data_offset": 2048, 
00:12:39.359 "data_size": 63488 00:12:39.359 }, 00:12:39.359 { 00:12:39.359 "name": "BaseBdev2", 00:12:39.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.359 "is_configured": false, 00:12:39.359 "data_offset": 0, 00:12:39.359 "data_size": 0 00:12:39.359 } 00:12:39.359 ] 00:12:39.359 }' 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.359 06:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.927 [2024-12-06 06:38:58.337023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.927 [2024-12-06 06:38:58.337358] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:39.927 [2024-12-06 06:38:58.337378] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:39.927 [2024-12-06 06:38:58.337712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:39.927 BaseBdev2 00:12:39.927 [2024-12-06 06:38:58.337931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:39.927 [2024-12-06 06:38:58.337966] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:39.927 [2024-12-06 06:38:58.338141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.927 [ 00:12:39.927 { 00:12:39.927 "name": "BaseBdev2", 00:12:39.927 "aliases": [ 00:12:39.927 "7f76ea49-44e1-40ce-b7fe-c743378bf575" 00:12:39.927 ], 00:12:39.927 "product_name": "Malloc disk", 00:12:39.927 "block_size": 512, 00:12:39.927 "num_blocks": 65536, 00:12:39.927 "uuid": "7f76ea49-44e1-40ce-b7fe-c743378bf575", 00:12:39.927 "assigned_rate_limits": { 00:12:39.927 "rw_ios_per_sec": 0, 00:12:39.927 "rw_mbytes_per_sec": 0, 00:12:39.927 "r_mbytes_per_sec": 0, 00:12:39.927 "w_mbytes_per_sec": 0 00:12:39.927 }, 00:12:39.927 "claimed": true, 00:12:39.927 "claim_type": 
"exclusive_write", 00:12:39.927 "zoned": false, 00:12:39.927 "supported_io_types": { 00:12:39.927 "read": true, 00:12:39.927 "write": true, 00:12:39.927 "unmap": true, 00:12:39.927 "flush": true, 00:12:39.927 "reset": true, 00:12:39.927 "nvme_admin": false, 00:12:39.927 "nvme_io": false, 00:12:39.927 "nvme_io_md": false, 00:12:39.927 "write_zeroes": true, 00:12:39.927 "zcopy": true, 00:12:39.927 "get_zone_info": false, 00:12:39.927 "zone_management": false, 00:12:39.927 "zone_append": false, 00:12:39.927 "compare": false, 00:12:39.927 "compare_and_write": false, 00:12:39.927 "abort": true, 00:12:39.927 "seek_hole": false, 00:12:39.927 "seek_data": false, 00:12:39.927 "copy": true, 00:12:39.927 "nvme_iov_md": false 00:12:39.927 }, 00:12:39.927 "memory_domains": [ 00:12:39.927 { 00:12:39.927 "dma_device_id": "system", 00:12:39.927 "dma_device_type": 1 00:12:39.927 }, 00:12:39.927 { 00:12:39.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.927 "dma_device_type": 2 00:12:39.927 } 00:12:39.927 ], 00:12:39.927 "driver_specific": {} 00:12:39.927 } 00:12:39.927 ] 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.927 "name": "Existed_Raid", 00:12:39.927 "uuid": "37edb2d2-6e27-4b18-8fe8-580b22367992", 00:12:39.927 "strip_size_kb": 0, 00:12:39.927 "state": "online", 00:12:39.927 "raid_level": "raid1", 00:12:39.927 "superblock": true, 00:12:39.927 "num_base_bdevs": 2, 00:12:39.927 "num_base_bdevs_discovered": 2, 00:12:39.927 "num_base_bdevs_operational": 2, 00:12:39.927 "base_bdevs_list": [ 00:12:39.927 { 00:12:39.927 "name": "BaseBdev1", 00:12:39.927 "uuid": "e0300db1-06a0-4488-a915-5f1443cd3057", 00:12:39.927 "is_configured": true, 00:12:39.927 "data_offset": 2048, 00:12:39.927 "data_size": 63488 
00:12:39.927 }, 00:12:39.927 { 00:12:39.927 "name": "BaseBdev2", 00:12:39.927 "uuid": "7f76ea49-44e1-40ce-b7fe-c743378bf575", 00:12:39.927 "is_configured": true, 00:12:39.927 "data_offset": 2048, 00:12:39.927 "data_size": 63488 00:12:39.927 } 00:12:39.927 ] 00:12:39.927 }' 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.927 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.565 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:40.565 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:40.565 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:40.565 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:40.565 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:40.565 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:40.565 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:40.565 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.565 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.565 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:40.565 [2024-12-06 06:38:58.925595] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.565 06:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.565 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:40.565 "name": 
"Existed_Raid", 00:12:40.565 "aliases": [ 00:12:40.565 "37edb2d2-6e27-4b18-8fe8-580b22367992" 00:12:40.565 ], 00:12:40.565 "product_name": "Raid Volume", 00:12:40.565 "block_size": 512, 00:12:40.565 "num_blocks": 63488, 00:12:40.565 "uuid": "37edb2d2-6e27-4b18-8fe8-580b22367992", 00:12:40.565 "assigned_rate_limits": { 00:12:40.565 "rw_ios_per_sec": 0, 00:12:40.565 "rw_mbytes_per_sec": 0, 00:12:40.565 "r_mbytes_per_sec": 0, 00:12:40.565 "w_mbytes_per_sec": 0 00:12:40.565 }, 00:12:40.565 "claimed": false, 00:12:40.565 "zoned": false, 00:12:40.565 "supported_io_types": { 00:12:40.565 "read": true, 00:12:40.565 "write": true, 00:12:40.565 "unmap": false, 00:12:40.565 "flush": false, 00:12:40.565 "reset": true, 00:12:40.565 "nvme_admin": false, 00:12:40.565 "nvme_io": false, 00:12:40.565 "nvme_io_md": false, 00:12:40.565 "write_zeroes": true, 00:12:40.565 "zcopy": false, 00:12:40.565 "get_zone_info": false, 00:12:40.565 "zone_management": false, 00:12:40.565 "zone_append": false, 00:12:40.565 "compare": false, 00:12:40.565 "compare_and_write": false, 00:12:40.565 "abort": false, 00:12:40.565 "seek_hole": false, 00:12:40.565 "seek_data": false, 00:12:40.565 "copy": false, 00:12:40.565 "nvme_iov_md": false 00:12:40.565 }, 00:12:40.565 "memory_domains": [ 00:12:40.565 { 00:12:40.565 "dma_device_id": "system", 00:12:40.565 "dma_device_type": 1 00:12:40.565 }, 00:12:40.565 { 00:12:40.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.565 "dma_device_type": 2 00:12:40.565 }, 00:12:40.565 { 00:12:40.565 "dma_device_id": "system", 00:12:40.565 "dma_device_type": 1 00:12:40.565 }, 00:12:40.565 { 00:12:40.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.565 "dma_device_type": 2 00:12:40.565 } 00:12:40.565 ], 00:12:40.565 "driver_specific": { 00:12:40.565 "raid": { 00:12:40.565 "uuid": "37edb2d2-6e27-4b18-8fe8-580b22367992", 00:12:40.565 "strip_size_kb": 0, 00:12:40.565 "state": "online", 00:12:40.565 "raid_level": "raid1", 00:12:40.565 "superblock": true, 00:12:40.565 
"num_base_bdevs": 2, 00:12:40.565 "num_base_bdevs_discovered": 2, 00:12:40.565 "num_base_bdevs_operational": 2, 00:12:40.565 "base_bdevs_list": [ 00:12:40.565 { 00:12:40.565 "name": "BaseBdev1", 00:12:40.565 "uuid": "e0300db1-06a0-4488-a915-5f1443cd3057", 00:12:40.565 "is_configured": true, 00:12:40.565 "data_offset": 2048, 00:12:40.565 "data_size": 63488 00:12:40.565 }, 00:12:40.565 { 00:12:40.565 "name": "BaseBdev2", 00:12:40.565 "uuid": "7f76ea49-44e1-40ce-b7fe-c743378bf575", 00:12:40.565 "is_configured": true, 00:12:40.565 "data_offset": 2048, 00:12:40.565 "data_size": 63488 00:12:40.565 } 00:12:40.565 ] 00:12:40.565 } 00:12:40.565 } 00:12:40.565 }' 00:12:40.565 06:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:40.565 BaseBdev2' 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.565 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.565 [2024-12-06 06:38:59.205350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:40.825 06:38:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.825 06:38:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.825 "name": "Existed_Raid", 00:12:40.825 "uuid": "37edb2d2-6e27-4b18-8fe8-580b22367992", 00:12:40.825 "strip_size_kb": 0, 00:12:40.825 "state": "online", 00:12:40.825 "raid_level": "raid1", 00:12:40.825 "superblock": true, 00:12:40.825 "num_base_bdevs": 2, 00:12:40.825 "num_base_bdevs_discovered": 1, 00:12:40.825 "num_base_bdevs_operational": 1, 00:12:40.825 "base_bdevs_list": [ 00:12:40.825 { 00:12:40.825 "name": null, 00:12:40.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.825 "is_configured": false, 00:12:40.825 "data_offset": 0, 00:12:40.825 "data_size": 63488 00:12:40.825 }, 00:12:40.825 { 00:12:40.825 "name": "BaseBdev2", 00:12:40.825 "uuid": "7f76ea49-44e1-40ce-b7fe-c743378bf575", 00:12:40.825 "is_configured": true, 00:12:40.825 "data_offset": 2048, 00:12:40.825 "data_size": 63488 00:12:40.825 } 00:12:40.825 ] 00:12:40.825 }' 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.825 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.394 06:38:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.394 [2024-12-06 06:38:59.873754] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:41.394 [2024-12-06 06:38:59.873888] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.394 [2024-12-06 06:38:59.959277] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.394 [2024-12-06 06:38:59.959354] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.394 [2024-12-06 06:38:59.959376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:41.394 06:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.394 06:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:41.394 06:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:41.394 06:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:41.394 06:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63115 00:12:41.394 06:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63115 ']' 00:12:41.394 06:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63115 00:12:41.394 06:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:41.394 06:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.394 06:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63115 00:12:41.653 06:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.653 06:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.653 killing process with pid 63115 00:12:41.653 06:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63115' 00:12:41.653 06:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63115 00:12:41.653 [2024-12-06 06:39:00.049671] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:41.653 06:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63115 
00:12:41.653 [2024-12-06 06:39:00.064222] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.592 06:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:42.592 00:12:42.592 real 0m5.692s 00:12:42.592 user 0m8.669s 00:12:42.592 sys 0m0.793s 00:12:42.592 06:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.592 ************************************ 00:12:42.592 END TEST raid_state_function_test_sb 00:12:42.592 06:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.592 ************************************ 00:12:42.592 06:39:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:12:42.592 06:39:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:42.592 06:39:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.592 06:39:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.592 ************************************ 00:12:42.592 START TEST raid_superblock_test 00:12:42.592 ************************************ 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 
-- # base_bdevs_pt_uuid=() 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63373 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63373 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63373 ']' 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.592 06:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.851 [2024-12-06 06:39:01.292080] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:12:42.852 [2024-12-06 06:39:01.292260] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63373 ] 00:12:42.852 [2024-12-06 06:39:01.483750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.110 [2024-12-06 06:39:01.647868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.370 [2024-12-06 06:39:01.858154] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.370 [2024-12-06 06:39:01.858231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.629 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.629 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:43.629 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:43.629 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:43.629 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:43.629 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:43.629 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:43.629 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:43.629 06:39:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:43.629 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:43.629 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:43.629 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.629 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.888 malloc1 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.888 [2024-12-06 06:39:02.297377] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:43.888 [2024-12-06 06:39:02.297452] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.888 [2024-12-06 06:39:02.297486] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:43.888 [2024-12-06 06:39:02.297503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.888 [2024-12-06 06:39:02.300403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.888 [2024-12-06 06:39:02.300465] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:43.888 pt1 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:43.888 06:39:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.888 malloc2 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.888 [2024-12-06 06:39:02.349896] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:43.888 [2024-12-06 06:39:02.349967] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.888 [2024-12-06 06:39:02.350006] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:43.888 
[2024-12-06 06:39:02.350023] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.888 [2024-12-06 06:39:02.352939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.888 [2024-12-06 06:39:02.352987] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:43.888 pt2 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.888 [2024-12-06 06:39:02.357947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:43.888 [2024-12-06 06:39:02.360406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:43.888 [2024-12-06 06:39:02.360673] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:43.888 [2024-12-06 06:39:02.360709] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:43.888 [2024-12-06 06:39:02.361031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:43.888 [2024-12-06 06:39:02.361267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:43.888 [2024-12-06 06:39:02.361303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:43.888 [2024-12-06 06:39:02.361490] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.888 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.888 "name": "raid_bdev1", 00:12:43.888 "uuid": 
"6dde230c-9132-4654-afdf-92da26bf51fc", 00:12:43.888 "strip_size_kb": 0, 00:12:43.888 "state": "online", 00:12:43.888 "raid_level": "raid1", 00:12:43.888 "superblock": true, 00:12:43.888 "num_base_bdevs": 2, 00:12:43.888 "num_base_bdevs_discovered": 2, 00:12:43.888 "num_base_bdevs_operational": 2, 00:12:43.888 "base_bdevs_list": [ 00:12:43.888 { 00:12:43.888 "name": "pt1", 00:12:43.889 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:43.889 "is_configured": true, 00:12:43.889 "data_offset": 2048, 00:12:43.889 "data_size": 63488 00:12:43.889 }, 00:12:43.889 { 00:12:43.889 "name": "pt2", 00:12:43.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:43.889 "is_configured": true, 00:12:43.889 "data_offset": 2048, 00:12:43.889 "data_size": 63488 00:12:43.889 } 00:12:43.889 ] 00:12:43.889 }' 00:12:43.889 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.889 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.508 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:44.508 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:44.508 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:44.508 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:44.508 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:44.508 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:44.508 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:44.508 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:44.508 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.508 06:39:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.508 [2024-12-06 06:39:02.874416] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.509 06:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.509 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:44.509 "name": "raid_bdev1", 00:12:44.509 "aliases": [ 00:12:44.509 "6dde230c-9132-4654-afdf-92da26bf51fc" 00:12:44.509 ], 00:12:44.509 "product_name": "Raid Volume", 00:12:44.509 "block_size": 512, 00:12:44.509 "num_blocks": 63488, 00:12:44.509 "uuid": "6dde230c-9132-4654-afdf-92da26bf51fc", 00:12:44.509 "assigned_rate_limits": { 00:12:44.509 "rw_ios_per_sec": 0, 00:12:44.509 "rw_mbytes_per_sec": 0, 00:12:44.509 "r_mbytes_per_sec": 0, 00:12:44.509 "w_mbytes_per_sec": 0 00:12:44.509 }, 00:12:44.509 "claimed": false, 00:12:44.509 "zoned": false, 00:12:44.509 "supported_io_types": { 00:12:44.509 "read": true, 00:12:44.509 "write": true, 00:12:44.509 "unmap": false, 00:12:44.509 "flush": false, 00:12:44.509 "reset": true, 00:12:44.509 "nvme_admin": false, 00:12:44.509 "nvme_io": false, 00:12:44.509 "nvme_io_md": false, 00:12:44.509 "write_zeroes": true, 00:12:44.509 "zcopy": false, 00:12:44.509 "get_zone_info": false, 00:12:44.509 "zone_management": false, 00:12:44.509 "zone_append": false, 00:12:44.509 "compare": false, 00:12:44.509 "compare_and_write": false, 00:12:44.509 "abort": false, 00:12:44.509 "seek_hole": false, 00:12:44.509 "seek_data": false, 00:12:44.509 "copy": false, 00:12:44.509 "nvme_iov_md": false 00:12:44.509 }, 00:12:44.509 "memory_domains": [ 00:12:44.509 { 00:12:44.509 "dma_device_id": "system", 00:12:44.509 "dma_device_type": 1 00:12:44.509 }, 00:12:44.509 { 00:12:44.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.509 "dma_device_type": 2 00:12:44.509 }, 00:12:44.509 { 00:12:44.509 "dma_device_id": "system", 00:12:44.509 "dma_device_type": 
1 00:12:44.509 }, 00:12:44.509 { 00:12:44.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.509 "dma_device_type": 2 00:12:44.509 } 00:12:44.509 ], 00:12:44.509 "driver_specific": { 00:12:44.509 "raid": { 00:12:44.509 "uuid": "6dde230c-9132-4654-afdf-92da26bf51fc", 00:12:44.509 "strip_size_kb": 0, 00:12:44.509 "state": "online", 00:12:44.509 "raid_level": "raid1", 00:12:44.509 "superblock": true, 00:12:44.509 "num_base_bdevs": 2, 00:12:44.509 "num_base_bdevs_discovered": 2, 00:12:44.509 "num_base_bdevs_operational": 2, 00:12:44.509 "base_bdevs_list": [ 00:12:44.509 { 00:12:44.509 "name": "pt1", 00:12:44.509 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:44.509 "is_configured": true, 00:12:44.509 "data_offset": 2048, 00:12:44.509 "data_size": 63488 00:12:44.509 }, 00:12:44.509 { 00:12:44.509 "name": "pt2", 00:12:44.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:44.509 "is_configured": true, 00:12:44.509 "data_offset": 2048, 00:12:44.509 "data_size": 63488 00:12:44.509 } 00:12:44.509 ] 00:12:44.509 } 00:12:44.509 } 00:12:44.509 }' 00:12:44.509 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:44.509 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:44.509 pt2' 00:12:44.509 06:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.509 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:44.509 [2024-12-06 06:39:03.122472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.509 06:39:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6dde230c-9132-4654-afdf-92da26bf51fc 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6dde230c-9132-4654-afdf-92da26bf51fc ']' 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.768 [2024-12-06 06:39:03.170088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:44.768 [2024-12-06 06:39:03.170124] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.768 [2024-12-06 06:39:03.170232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.768 [2024-12-06 06:39:03.170330] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.768 [2024-12-06 06:39:03.170366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.768 06:39:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.768 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.768 [2024-12-06 06:39:03.302150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:44.768 [2024-12-06 06:39:03.304829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:44.768 [2024-12-06 06:39:03.304922] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:44.768 [2024-12-06 06:39:03.304996] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:44.768 [2024-12-06 06:39:03.305024] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:44.768 [2024-12-06 06:39:03.305039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 
name raid_bdev1, state configuring 00:12:44.768 request: 00:12:44.768 { 00:12:44.768 "name": "raid_bdev1", 00:12:44.768 "raid_level": "raid1", 00:12:44.768 "base_bdevs": [ 00:12:44.768 "malloc1", 00:12:44.768 "malloc2" 00:12:44.768 ], 00:12:44.768 "superblock": false, 00:12:44.769 "method": "bdev_raid_create", 00:12:44.769 "req_id": 1 00:12:44.769 } 00:12:44.769 Got JSON-RPC error response 00:12:44.769 response: 00:12:44.769 { 00:12:44.769 "code": -17, 00:12:44.769 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:44.769 } 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.769 [2024-12-06 06:39:03.366158] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:44.769 [2024-12-06 06:39:03.366224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.769 [2024-12-06 06:39:03.366264] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:44.769 [2024-12-06 06:39:03.366283] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.769 [2024-12-06 06:39:03.369398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.769 [2024-12-06 06:39:03.369449] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:44.769 [2024-12-06 06:39:03.369567] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:44.769 [2024-12-06 06:39:03.369639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:44.769 pt1 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.769 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.029 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.029 "name": "raid_bdev1", 00:12:45.029 "uuid": "6dde230c-9132-4654-afdf-92da26bf51fc", 00:12:45.029 "strip_size_kb": 0, 00:12:45.029 "state": "configuring", 00:12:45.029 "raid_level": "raid1", 00:12:45.029 "superblock": true, 00:12:45.029 "num_base_bdevs": 2, 00:12:45.029 "num_base_bdevs_discovered": 1, 00:12:45.029 "num_base_bdevs_operational": 2, 00:12:45.029 "base_bdevs_list": [ 00:12:45.029 { 00:12:45.029 "name": "pt1", 00:12:45.029 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:45.029 "is_configured": true, 00:12:45.029 "data_offset": 2048, 00:12:45.029 "data_size": 63488 00:12:45.029 }, 00:12:45.029 { 00:12:45.029 "name": null, 00:12:45.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:45.029 "is_configured": false, 00:12:45.029 "data_offset": 2048, 00:12:45.029 "data_size": 63488 00:12:45.029 } 00:12:45.029 ] 00:12:45.029 }' 00:12:45.029 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.029 06:39:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.289 [2024-12-06 06:39:03.878339] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:45.289 [2024-12-06 06:39:03.878429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.289 [2024-12-06 06:39:03.878464] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:45.289 [2024-12-06 06:39:03.878484] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.289 [2024-12-06 06:39:03.879115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.289 [2024-12-06 06:39:03.879165] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:45.289 [2024-12-06 06:39:03.879270] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:45.289 [2024-12-06 06:39:03.879312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:45.289 [2024-12-06 06:39:03.879460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:45.289 [2024-12-06 06:39:03.879498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:45.289 [2024-12-06 
06:39:03.879864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:45.289 [2024-12-06 06:39:03.880069] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:45.289 [2024-12-06 06:39:03.880094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:45.289 [2024-12-06 06:39:03.880282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.289 pt2 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.289 "name": "raid_bdev1", 00:12:45.289 "uuid": "6dde230c-9132-4654-afdf-92da26bf51fc", 00:12:45.289 "strip_size_kb": 0, 00:12:45.289 "state": "online", 00:12:45.289 "raid_level": "raid1", 00:12:45.289 "superblock": true, 00:12:45.289 "num_base_bdevs": 2, 00:12:45.289 "num_base_bdevs_discovered": 2, 00:12:45.289 "num_base_bdevs_operational": 2, 00:12:45.289 "base_bdevs_list": [ 00:12:45.289 { 00:12:45.289 "name": "pt1", 00:12:45.289 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:45.289 "is_configured": true, 00:12:45.289 "data_offset": 2048, 00:12:45.289 "data_size": 63488 00:12:45.289 }, 00:12:45.289 { 00:12:45.289 "name": "pt2", 00:12:45.289 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:45.289 "is_configured": true, 00:12:45.289 "data_offset": 2048, 00:12:45.289 "data_size": 63488 00:12:45.289 } 00:12:45.289 ] 00:12:45.289 }' 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.289 06:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.856 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:45.856 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:45.856 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:12:45.856 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:45.857 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:45.857 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:45.857 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:45.857 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:45.857 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.857 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.857 [2024-12-06 06:39:04.386825] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:45.857 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.857 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:45.857 "name": "raid_bdev1", 00:12:45.857 "aliases": [ 00:12:45.857 "6dde230c-9132-4654-afdf-92da26bf51fc" 00:12:45.857 ], 00:12:45.857 "product_name": "Raid Volume", 00:12:45.857 "block_size": 512, 00:12:45.857 "num_blocks": 63488, 00:12:45.857 "uuid": "6dde230c-9132-4654-afdf-92da26bf51fc", 00:12:45.857 "assigned_rate_limits": { 00:12:45.857 "rw_ios_per_sec": 0, 00:12:45.857 "rw_mbytes_per_sec": 0, 00:12:45.857 "r_mbytes_per_sec": 0, 00:12:45.857 "w_mbytes_per_sec": 0 00:12:45.857 }, 00:12:45.857 "claimed": false, 00:12:45.857 "zoned": false, 00:12:45.857 "supported_io_types": { 00:12:45.857 "read": true, 00:12:45.857 "write": true, 00:12:45.857 "unmap": false, 00:12:45.857 "flush": false, 00:12:45.857 "reset": true, 00:12:45.857 "nvme_admin": false, 00:12:45.857 "nvme_io": false, 00:12:45.857 "nvme_io_md": false, 00:12:45.857 "write_zeroes": true, 00:12:45.857 "zcopy": false, 00:12:45.857 "get_zone_info": false, 
00:12:45.857 "zone_management": false, 00:12:45.857 "zone_append": false, 00:12:45.857 "compare": false, 00:12:45.857 "compare_and_write": false, 00:12:45.857 "abort": false, 00:12:45.857 "seek_hole": false, 00:12:45.857 "seek_data": false, 00:12:45.857 "copy": false, 00:12:45.857 "nvme_iov_md": false 00:12:45.857 }, 00:12:45.857 "memory_domains": [ 00:12:45.857 { 00:12:45.857 "dma_device_id": "system", 00:12:45.857 "dma_device_type": 1 00:12:45.857 }, 00:12:45.857 { 00:12:45.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.857 "dma_device_type": 2 00:12:45.857 }, 00:12:45.857 { 00:12:45.857 "dma_device_id": "system", 00:12:45.857 "dma_device_type": 1 00:12:45.857 }, 00:12:45.857 { 00:12:45.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.857 "dma_device_type": 2 00:12:45.857 } 00:12:45.857 ], 00:12:45.857 "driver_specific": { 00:12:45.857 "raid": { 00:12:45.857 "uuid": "6dde230c-9132-4654-afdf-92da26bf51fc", 00:12:45.857 "strip_size_kb": 0, 00:12:45.857 "state": "online", 00:12:45.857 "raid_level": "raid1", 00:12:45.857 "superblock": true, 00:12:45.857 "num_base_bdevs": 2, 00:12:45.857 "num_base_bdevs_discovered": 2, 00:12:45.857 "num_base_bdevs_operational": 2, 00:12:45.857 "base_bdevs_list": [ 00:12:45.857 { 00:12:45.857 "name": "pt1", 00:12:45.857 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:45.857 "is_configured": true, 00:12:45.857 "data_offset": 2048, 00:12:45.857 "data_size": 63488 00:12:45.857 }, 00:12:45.857 { 00:12:45.857 "name": "pt2", 00:12:45.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:45.857 "is_configured": true, 00:12:45.857 "data_offset": 2048, 00:12:45.857 "data_size": 63488 00:12:45.857 } 00:12:45.857 ] 00:12:45.857 } 00:12:45.857 } 00:12:45.857 }' 00:12:45.857 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:45.857 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:12:45.857 pt2' 00:12:45.857 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.116 [2024-12-06 06:39:04.638829] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6dde230c-9132-4654-afdf-92da26bf51fc '!=' 6dde230c-9132-4654-afdf-92da26bf51fc ']' 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.116 [2024-12-06 06:39:04.690612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.116 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.116 "name": "raid_bdev1", 00:12:46.116 "uuid": "6dde230c-9132-4654-afdf-92da26bf51fc", 00:12:46.116 "strip_size_kb": 0, 00:12:46.116 "state": "online", 00:12:46.116 "raid_level": "raid1", 00:12:46.116 "superblock": true, 00:12:46.116 "num_base_bdevs": 2, 00:12:46.116 "num_base_bdevs_discovered": 1, 00:12:46.116 "num_base_bdevs_operational": 1, 00:12:46.116 "base_bdevs_list": [ 00:12:46.116 { 00:12:46.116 "name": null, 00:12:46.116 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:46.116 "is_configured": false, 00:12:46.116 "data_offset": 0, 00:12:46.116 "data_size": 63488 00:12:46.117 }, 00:12:46.117 { 00:12:46.117 "name": "pt2", 00:12:46.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:46.117 "is_configured": true, 00:12:46.117 "data_offset": 2048, 00:12:46.117 "data_size": 63488 00:12:46.117 } 00:12:46.117 ] 00:12:46.117 }' 00:12:46.117 06:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.117 06:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.706 [2024-12-06 06:39:05.226693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.706 [2024-12-06 06:39:05.226732] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.706 [2024-12-06 06:39:05.226830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.706 [2024-12-06 06:39:05.226898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.706 [2024-12-06 06:39:05.226919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.706 
06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.706 [2024-12-06 06:39:05.294687] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 
00:12:46.706 [2024-12-06 06:39:05.294758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.706 [2024-12-06 06:39:05.294783] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:46.706 [2024-12-06 06:39:05.294801] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.706 [2024-12-06 06:39:05.297731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.706 [2024-12-06 06:39:05.297783] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:46.706 [2024-12-06 06:39:05.297884] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:46.706 [2024-12-06 06:39:05.297948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:46.706 [2024-12-06 06:39:05.298075] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:46.706 [2024-12-06 06:39:05.298098] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:46.706 [2024-12-06 06:39:05.298389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:46.706 [2024-12-06 06:39:05.298616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:46.706 [2024-12-06 06:39:05.298642] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:46.706 [2024-12-06 06:39:05.298867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.706 pt2 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.706 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.964 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.964 "name": "raid_bdev1", 00:12:46.964 "uuid": "6dde230c-9132-4654-afdf-92da26bf51fc", 00:12:46.964 "strip_size_kb": 0, 00:12:46.964 "state": "online", 00:12:46.964 "raid_level": "raid1", 00:12:46.964 "superblock": true, 00:12:46.964 "num_base_bdevs": 2, 00:12:46.964 "num_base_bdevs_discovered": 1, 00:12:46.964 "num_base_bdevs_operational": 1, 00:12:46.964 "base_bdevs_list": [ 00:12:46.964 { 00:12:46.964 "name": null, 00:12:46.964 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:46.964 "is_configured": false, 00:12:46.964 "data_offset": 2048, 00:12:46.964 "data_size": 63488 00:12:46.964 }, 00:12:46.964 { 00:12:46.964 "name": "pt2", 00:12:46.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:46.964 "is_configured": true, 00:12:46.964 "data_offset": 2048, 00:12:46.964 "data_size": 63488 00:12:46.964 } 00:12:46.964 ] 00:12:46.964 }' 00:12:46.964 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.964 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.223 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:47.223 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.223 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.223 [2024-12-06 06:39:05.830920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:47.223 [2024-12-06 06:39:05.830963] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:47.223 [2024-12-06 06:39:05.831059] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:47.223 [2024-12-06 06:39:05.831131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:47.223 [2024-12-06 06:39:05.831149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:47.223 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.223 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:47.223 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.223 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.223 
06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.223 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.482 [2024-12-06 06:39:05.898979] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:47.482 [2024-12-06 06:39:05.899057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.482 [2024-12-06 06:39:05.899089] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:47.482 [2024-12-06 06:39:05.899105] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.482 [2024-12-06 06:39:05.902074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.482 [2024-12-06 06:39:05.902123] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:47.482 [2024-12-06 06:39:05.902251] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:47.482 [2024-12-06 06:39:05.902309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:47.482 [2024-12-06 06:39:05.902492] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 
00:12:47.482 [2024-12-06 06:39:05.902546] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:47.482 [2024-12-06 06:39:05.902574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:47.482 [2024-12-06 06:39:05.902641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:47.482 [2024-12-06 06:39:05.902748] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:47.482 [2024-12-06 06:39:05.902764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:47.482 [2024-12-06 06:39:05.903079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:47.482 [2024-12-06 06:39:05.903291] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:47.482 [2024-12-06 06:39:05.903323] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:47.482 [2024-12-06 06:39:05.903581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.482 pt1 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.482 06:39:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.482 "name": "raid_bdev1", 00:12:47.482 "uuid": "6dde230c-9132-4654-afdf-92da26bf51fc", 00:12:47.482 "strip_size_kb": 0, 00:12:47.482 "state": "online", 00:12:47.482 "raid_level": "raid1", 00:12:47.482 "superblock": true, 00:12:47.482 "num_base_bdevs": 2, 00:12:47.482 "num_base_bdevs_discovered": 1, 00:12:47.482 "num_base_bdevs_operational": 1, 00:12:47.482 "base_bdevs_list": [ 00:12:47.482 { 00:12:47.482 "name": null, 00:12:47.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.482 "is_configured": false, 00:12:47.482 "data_offset": 2048, 00:12:47.482 "data_size": 63488 00:12:47.482 }, 00:12:47.482 { 00:12:47.482 "name": "pt2", 00:12:47.482 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.482 "is_configured": true, 00:12:47.482 "data_offset": 2048, 00:12:47.482 "data_size": 63488 00:12:47.482 } 
00:12:47.482 ] 00:12:47.482 }' 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.482 06:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:48.088 [2024-12-06 06:39:06.471961] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6dde230c-9132-4654-afdf-92da26bf51fc '!=' 6dde230c-9132-4654-afdf-92da26bf51fc ']' 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63373 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63373 ']' 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 63373 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63373 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.088 killing process with pid 63373 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63373' 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63373 00:12:48.088 [2024-12-06 06:39:06.554179] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:48.088 06:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63373 00:12:48.088 [2024-12-06 06:39:06.554307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:48.088 [2024-12-06 06:39:06.554382] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:48.088 [2024-12-06 06:39:06.554405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:48.391 [2024-12-06 06:39:06.743632] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:49.327 06:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:49.327 00:12:49.327 real 0m6.635s 00:12:49.327 user 0m10.516s 00:12:49.327 sys 0m0.927s 00:12:49.327 06:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.327 06:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:12:49.327 ************************************ 00:12:49.327 END TEST raid_superblock_test 00:12:49.327 ************************************ 00:12:49.327 06:39:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:12:49.327 06:39:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:49.327 06:39:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.327 06:39:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:49.327 ************************************ 00:12:49.327 START TEST raid_read_error_test 00:12:49.327 ************************************ 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:49.327 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:49.328 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:49.328 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:49.328 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:49.328 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JMSRGJo1iA 00:12:49.328 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63708 00:12:49.328 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63708 00:12:49.328 06:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:49.328 06:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63708 ']' 00:12:49.328 06:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.328 06:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:49.328 06:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.328 06:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.328 06:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.587 [2024-12-06 06:39:07.980323] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:12:49.587 [2024-12-06 06:39:07.980485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63708 ] 00:12:49.587 [2024-12-06 06:39:08.157094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.846 [2024-12-06 06:39:08.295913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.105 [2024-12-06 06:39:08.504930] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.105 [2024-12-06 06:39:08.505014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.365 06:39:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.365 06:39:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:50.365 06:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:50.365 06:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:50.365 06:39:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.365 06:39:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.624 BaseBdev1_malloc 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.624 true 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.624 [2024-12-06 06:39:09.051810] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:50.624 [2024-12-06 06:39:09.051914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.624 [2024-12-06 06:39:09.051945] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:50.624 [2024-12-06 06:39:09.051964] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.624 [2024-12-06 06:39:09.054803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.624 [2024-12-06 06:39:09.054858] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:50.624 BaseBdev1 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:50.624 06:39:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.624 BaseBdev2_malloc 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.624 true 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.624 [2024-12-06 06:39:09.116267] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:50.624 [2024-12-06 06:39:09.116481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.624 [2024-12-06 06:39:09.116583] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:50.624 [2024-12-06 06:39:09.116816] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.624 [2024-12-06 06:39:09.119717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.624 [2024-12-06 06:39:09.119886] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:50.624 BaseBdev2 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.624 
06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.624 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.624 [2024-12-06 06:39:09.124552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.624 [2024-12-06 06:39:09.127154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.624 [2024-12-06 06:39:09.127589] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:50.624 [2024-12-06 06:39:09.127734] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:50.624 [2024-12-06 06:39:09.128090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:50.624 [2024-12-06 06:39:09.128490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:50.624 [2024-12-06 06:39:09.128515] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:50.624 [2024-12-06 06:39:09.128777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.625 06:39:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.625 "name": "raid_bdev1", 00:12:50.625 "uuid": "6333b80d-a7a1-45e9-8ac2-b021391bd38e", 00:12:50.625 "strip_size_kb": 0, 00:12:50.625 "state": "online", 00:12:50.625 "raid_level": "raid1", 00:12:50.625 "superblock": true, 00:12:50.625 "num_base_bdevs": 2, 00:12:50.625 "num_base_bdevs_discovered": 2, 00:12:50.625 "num_base_bdevs_operational": 2, 00:12:50.625 "base_bdevs_list": [ 00:12:50.625 { 00:12:50.625 "name": "BaseBdev1", 00:12:50.625 "uuid": "57291205-ce90-5508-8e79-bc7d60013c2f", 00:12:50.625 "is_configured": true, 00:12:50.625 "data_offset": 2048, 00:12:50.625 "data_size": 63488 00:12:50.625 }, 00:12:50.625 { 00:12:50.625 "name": "BaseBdev2", 00:12:50.625 "uuid": "84b91fa8-1811-5fc5-a6e0-09a41ab0bde0", 
00:12:50.625 "is_configured": true, 00:12:50.625 "data_offset": 2048, 00:12:50.625 "data_size": 63488 00:12:50.625 } 00:12:50.625 ] 00:12:50.625 }' 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.625 06:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.192 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:51.192 06:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:51.192 [2024-12-06 06:39:09.802429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.129 06:39:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.129 "name": "raid_bdev1", 00:12:52.129 "uuid": "6333b80d-a7a1-45e9-8ac2-b021391bd38e", 00:12:52.129 "strip_size_kb": 0, 00:12:52.129 "state": "online", 00:12:52.129 "raid_level": "raid1", 00:12:52.129 "superblock": true, 00:12:52.129 "num_base_bdevs": 2, 00:12:52.129 "num_base_bdevs_discovered": 2, 00:12:52.129 "num_base_bdevs_operational": 2, 00:12:52.129 "base_bdevs_list": [ 00:12:52.129 { 00:12:52.129 "name": "BaseBdev1", 00:12:52.129 "uuid": "57291205-ce90-5508-8e79-bc7d60013c2f", 00:12:52.129 "is_configured": true, 00:12:52.129 "data_offset": 2048, 00:12:52.129 "data_size": 63488 00:12:52.129 }, 00:12:52.129 
{ 00:12:52.129 "name": "BaseBdev2", 00:12:52.129 "uuid": "84b91fa8-1811-5fc5-a6e0-09a41ab0bde0", 00:12:52.129 "is_configured": true, 00:12:52.129 "data_offset": 2048, 00:12:52.129 "data_size": 63488 00:12:52.129 } 00:12:52.129 ] 00:12:52.129 }' 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.129 06:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.698 06:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:52.698 06:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.698 06:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.698 [2024-12-06 06:39:11.197273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:52.698 [2024-12-06 06:39:11.197328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.698 [2024-12-06 06:39:11.201115] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.698 [2024-12-06 06:39:11.201367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.698 [2024-12-06 06:39:11.201637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.698 [2024-12-06 06:39:11.201805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:52.698 { 00:12:52.698 "results": [ 00:12:52.698 { 00:12:52.698 "job": "raid_bdev1", 00:12:52.698 "core_mask": "0x1", 00:12:52.698 "workload": "randrw", 00:12:52.698 "percentage": 50, 00:12:52.698 "status": "finished", 00:12:52.698 "queue_depth": 1, 00:12:52.698 "io_size": 131072, 00:12:52.698 "runtime": 1.3923, 00:12:52.698 "iops": 11621.776915894563, 00:12:52.698 "mibps": 1452.7221144868204, 00:12:52.698 "io_failed": 0, 00:12:52.698 
"io_timeout": 0, 00:12:52.698 "avg_latency_us": 81.62667101145563, 00:12:52.698 "min_latency_us": 44.916363636363634, 00:12:52.698 "max_latency_us": 1861.8181818181818 00:12:52.698 } 00:12:52.698 ], 00:12:52.698 "core_count": 1 00:12:52.698 } 00:12:52.698 06:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.698 06:39:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63708 00:12:52.698 06:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63708 ']' 00:12:52.698 06:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63708 00:12:52.698 06:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:52.698 06:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.698 06:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63708 00:12:52.698 killing process with pid 63708 00:12:52.698 06:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.698 06:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.698 06:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63708' 00:12:52.698 06:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63708 00:12:52.698 [2024-12-06 06:39:11.243747] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:52.698 06:39:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63708 00:12:52.956 [2024-12-06 06:39:11.370324] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:53.893 06:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JMSRGJo1iA 00:12:53.893 06:39:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:53.893 06:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:53.893 ************************************ 00:12:53.893 END TEST raid_read_error_test 00:12:53.893 ************************************ 00:12:53.893 06:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:53.893 06:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:53.893 06:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:53.893 06:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:53.893 06:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:53.893 00:12:53.893 real 0m4.640s 00:12:53.893 user 0m5.836s 00:12:53.893 sys 0m0.563s 00:12:53.893 06:39:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.893 06:39:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.158 06:39:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:12:54.158 06:39:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:54.158 06:39:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.158 06:39:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:54.158 ************************************ 00:12:54.158 START TEST raid_write_error_test 00:12:54.158 ************************************ 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 
-- # bdevperf_log=/raidtest/tmp.1v6GZ7C6Qm 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63858 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63858 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63858 ']' 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.158 06:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.158 [2024-12-06 06:39:12.669247] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:12:54.158 [2024-12-06 06:39:12.669399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63858 ] 00:12:54.421 [2024-12-06 06:39:12.840280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.421 [2024-12-06 06:39:12.971677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.679 [2024-12-06 06:39:13.180217] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.679 [2024-12-06 06:39:13.180266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.245 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.245 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:55.245 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:55.245 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:55.245 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.245 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.245 BaseBdev1_malloc 00:12:55.245 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.245 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:55.245 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.245 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.245 true 00:12:55.245 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:55.245 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:55.245 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.245 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.245 [2024-12-06 06:39:13.748383] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:55.245 [2024-12-06 06:39:13.748606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.246 [2024-12-06 06:39:13.748684] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:55.246 [2024-12-06 06:39:13.748909] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.246 [2024-12-06 06:39:13.751825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.246 [2024-12-06 06:39:13.751875] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:55.246 BaseBdev1 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.246 BaseBdev2_malloc 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:55.246 06:39:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.246 true 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.246 [2024-12-06 06:39:13.805066] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:55.246 [2024-12-06 06:39:13.805277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.246 [2024-12-06 06:39:13.805411] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:55.246 [2024-12-06 06:39:13.805443] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.246 [2024-12-06 06:39:13.808238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.246 BaseBdev2 00:12:55.246 [2024-12-06 06:39:13.808419] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.246 [2024-12-06 06:39:13.813280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:12:55.246 [2024-12-06 06:39:13.815998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.246 [2024-12-06 06:39:13.816419] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:55.246 [2024-12-06 06:39:13.816583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:55.246 [2024-12-06 06:39:13.816964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:55.246 [2024-12-06 06:39:13.817360] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:55.246 [2024-12-06 06:39:13.817491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:55.246 [2024-12-06 06:39:13.817898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.246 "name": "raid_bdev1", 00:12:55.246 "uuid": "b02492a7-7aec-40a0-92d2-0189597efed5", 00:12:55.246 "strip_size_kb": 0, 00:12:55.246 "state": "online", 00:12:55.246 "raid_level": "raid1", 00:12:55.246 "superblock": true, 00:12:55.246 "num_base_bdevs": 2, 00:12:55.246 "num_base_bdevs_discovered": 2, 00:12:55.246 "num_base_bdevs_operational": 2, 00:12:55.246 "base_bdevs_list": [ 00:12:55.246 { 00:12:55.246 "name": "BaseBdev1", 00:12:55.246 "uuid": "fc94dec1-3ab1-59c1-bb1f-e70390815360", 00:12:55.246 "is_configured": true, 00:12:55.246 "data_offset": 2048, 00:12:55.246 "data_size": 63488 00:12:55.246 }, 00:12:55.246 { 00:12:55.246 "name": "BaseBdev2", 00:12:55.246 "uuid": "a66c1e94-e19a-5487-8537-cef07b3fe9c2", 00:12:55.246 "is_configured": true, 00:12:55.246 "data_offset": 2048, 00:12:55.246 "data_size": 63488 00:12:55.246 } 00:12:55.246 ] 00:12:55.246 }' 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.246 06:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.812 06:39:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:55.812 06:39:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:55.812 [2024-12-06 06:39:14.443420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.748 [2024-12-06 06:39:15.341969] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:56.748 [2024-12-06 06:39:15.342199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:56.748 [2024-12-06 06:39:15.342454] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.748 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.011 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.011 "name": "raid_bdev1", 00:12:57.011 "uuid": "b02492a7-7aec-40a0-92d2-0189597efed5", 00:12:57.011 "strip_size_kb": 0, 00:12:57.011 "state": "online", 00:12:57.011 "raid_level": "raid1", 00:12:57.011 "superblock": true, 00:12:57.011 "num_base_bdevs": 2, 00:12:57.011 "num_base_bdevs_discovered": 1, 00:12:57.011 "num_base_bdevs_operational": 1, 00:12:57.011 "base_bdevs_list": [ 00:12:57.011 { 00:12:57.011 "name": null, 00:12:57.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.011 "is_configured": false, 00:12:57.011 "data_offset": 0, 00:12:57.011 "data_size": 63488 00:12:57.011 }, 00:12:57.011 { 00:12:57.011 "name": 
"BaseBdev2", 00:12:57.011 "uuid": "a66c1e94-e19a-5487-8537-cef07b3fe9c2", 00:12:57.011 "is_configured": true, 00:12:57.011 "data_offset": 2048, 00:12:57.011 "data_size": 63488 00:12:57.011 } 00:12:57.011 ] 00:12:57.011 }' 00:12:57.011 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.011 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.269 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:57.269 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.269 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.269 [2024-12-06 06:39:15.881273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:57.269 [2024-12-06 06:39:15.881310] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.269 [2024-12-06 06:39:15.884812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.269 [2024-12-06 06:39:15.884891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.269 [2024-12-06 06:39:15.885013] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.269 [2024-12-06 06:39:15.885032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:57.269 { 00:12:57.269 "results": [ 00:12:57.269 { 00:12:57.269 "job": "raid_bdev1", 00:12:57.269 "core_mask": "0x1", 00:12:57.269 "workload": "randrw", 00:12:57.269 "percentage": 50, 00:12:57.269 "status": "finished", 00:12:57.269 "queue_depth": 1, 00:12:57.269 "io_size": 131072, 00:12:57.269 "runtime": 1.435165, 00:12:57.269 "iops": 12837.548295840548, 00:12:57.269 "mibps": 1604.6935369800685, 00:12:57.269 "io_failed": 0, 00:12:57.269 "io_timeout": 0, 
00:12:57.269 "avg_latency_us": 73.2367046934828, 00:12:57.269 "min_latency_us": 41.42545454545454, 00:12:57.269 "max_latency_us": 1995.8690909090908 00:12:57.269 } 00:12:57.269 ], 00:12:57.269 "core_count": 1 00:12:57.269 } 00:12:57.269 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.269 06:39:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63858 00:12:57.269 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63858 ']' 00:12:57.269 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63858 00:12:57.269 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:57.269 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.269 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63858 00:12:57.528 killing process with pid 63858 00:12:57.528 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.528 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.528 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63858' 00:12:57.528 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63858 00:12:57.528 06:39:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63858 00:12:57.528 [2024-12-06 06:39:15.922064] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:57.528 [2024-12-06 06:39:16.049317] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:58.904 06:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1v6GZ7C6Qm 00:12:58.905 06:39:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:58.905 06:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:58.905 06:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:58.905 06:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:58.905 06:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:58.905 06:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:58.905 06:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:58.905 00:12:58.905 real 0m4.614s 00:12:58.905 user 0m5.794s 00:12:58.905 sys 0m0.539s 00:12:58.905 06:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.905 ************************************ 00:12:58.905 END TEST raid_write_error_test 00:12:58.905 ************************************ 00:12:58.905 06:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.905 06:39:17 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:58.905 06:39:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:58.905 06:39:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:12:58.905 06:39:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:58.905 06:39:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.905 06:39:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:58.905 ************************************ 00:12:58.905 START TEST raid_state_function_test 00:12:58.905 ************************************ 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:58.905 
06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:58.905 Process raid pid: 63996 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63996 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63996' 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63996 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63996 ']' 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.905 06:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.905 [2024-12-06 06:39:17.350581] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:12:58.905 [2024-12-06 06:39:17.351037] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.905 [2024-12-06 06:39:17.532649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.164 [2024-12-06 06:39:17.664185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.423 [2024-12-06 06:39:17.878814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.423 [2024-12-06 06:39:17.879101] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.989 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.989 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:59.989 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:59.989 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.989 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.989 [2024-12-06 06:39:18.369382] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:59.989 [2024-12-06 06:39:18.369651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:59.990 [2024-12-06 06:39:18.369681] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:59.990 [2024-12-06 06:39:18.369701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:59.990 [2024-12-06 06:39:18.369712] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:59.990 [2024-12-06 06:39:18.369727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.990 "name": "Existed_Raid", 00:12:59.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.990 "strip_size_kb": 64, 00:12:59.990 "state": "configuring", 00:12:59.990 "raid_level": "raid0", 00:12:59.990 "superblock": false, 00:12:59.990 "num_base_bdevs": 3, 00:12:59.990 "num_base_bdevs_discovered": 0, 00:12:59.990 "num_base_bdevs_operational": 3, 00:12:59.990 "base_bdevs_list": [ 00:12:59.990 { 00:12:59.990 "name": "BaseBdev1", 00:12:59.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.990 "is_configured": false, 00:12:59.990 "data_offset": 0, 00:12:59.990 "data_size": 0 00:12:59.990 }, 00:12:59.990 { 00:12:59.990 "name": "BaseBdev2", 00:12:59.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.990 "is_configured": false, 00:12:59.990 "data_offset": 0, 00:12:59.990 "data_size": 0 00:12:59.990 }, 00:12:59.990 { 00:12:59.990 "name": "BaseBdev3", 00:12:59.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.990 "is_configured": false, 00:12:59.990 "data_offset": 0, 00:12:59.990 "data_size": 0 00:12:59.990 } 00:12:59.990 ] 00:12:59.990 }' 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.990 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.557 06:39:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.557 [2024-12-06 06:39:18.905474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:00.557 [2024-12-06 06:39:18.905520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.557 [2024-12-06 06:39:18.917489] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:00.557 [2024-12-06 06:39:18.917706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:00.557 [2024-12-06 06:39:18.917832] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:00.557 [2024-12-06 06:39:18.917868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:00.557 [2024-12-06 06:39:18.917881] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:00.557 [2024-12-06 06:39:18.917895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.557 [2024-12-06 06:39:18.963725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.557 BaseBdev1 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.557 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.557 [ 00:13:00.557 { 00:13:00.557 "name": "BaseBdev1", 00:13:00.557 "aliases": [ 00:13:00.557 "0745d0af-57ec-4e34-b877-b1d76fd71d73" 00:13:00.557 ], 00:13:00.557 
"product_name": "Malloc disk", 00:13:00.557 "block_size": 512, 00:13:00.557 "num_blocks": 65536, 00:13:00.558 "uuid": "0745d0af-57ec-4e34-b877-b1d76fd71d73", 00:13:00.558 "assigned_rate_limits": { 00:13:00.558 "rw_ios_per_sec": 0, 00:13:00.558 "rw_mbytes_per_sec": 0, 00:13:00.558 "r_mbytes_per_sec": 0, 00:13:00.558 "w_mbytes_per_sec": 0 00:13:00.558 }, 00:13:00.558 "claimed": true, 00:13:00.558 "claim_type": "exclusive_write", 00:13:00.558 "zoned": false, 00:13:00.558 "supported_io_types": { 00:13:00.558 "read": true, 00:13:00.558 "write": true, 00:13:00.558 "unmap": true, 00:13:00.558 "flush": true, 00:13:00.558 "reset": true, 00:13:00.558 "nvme_admin": false, 00:13:00.558 "nvme_io": false, 00:13:00.558 "nvme_io_md": false, 00:13:00.558 "write_zeroes": true, 00:13:00.558 "zcopy": true, 00:13:00.558 "get_zone_info": false, 00:13:00.558 "zone_management": false, 00:13:00.558 "zone_append": false, 00:13:00.558 "compare": false, 00:13:00.558 "compare_and_write": false, 00:13:00.558 "abort": true, 00:13:00.558 "seek_hole": false, 00:13:00.558 "seek_data": false, 00:13:00.558 "copy": true, 00:13:00.558 "nvme_iov_md": false 00:13:00.558 }, 00:13:00.558 "memory_domains": [ 00:13:00.558 { 00:13:00.558 "dma_device_id": "system", 00:13:00.558 "dma_device_type": 1 00:13:00.558 }, 00:13:00.558 { 00:13:00.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.558 "dma_device_type": 2 00:13:00.558 } 00:13:00.558 ], 00:13:00.558 "driver_specific": {} 00:13:00.558 } 00:13:00.558 ] 00:13:00.558 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.558 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:00.558 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:00.558 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.558 06:39:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.558 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:00.558 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.558 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.558 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.558 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.558 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.558 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.558 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.558 06:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.558 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.558 06:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.558 06:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.558 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.558 "name": "Existed_Raid", 00:13:00.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.558 "strip_size_kb": 64, 00:13:00.558 "state": "configuring", 00:13:00.558 "raid_level": "raid0", 00:13:00.558 "superblock": false, 00:13:00.558 "num_base_bdevs": 3, 00:13:00.558 "num_base_bdevs_discovered": 1, 00:13:00.558 "num_base_bdevs_operational": 3, 00:13:00.558 "base_bdevs_list": [ 00:13:00.558 { 00:13:00.558 "name": "BaseBdev1", 
00:13:00.558 "uuid": "0745d0af-57ec-4e34-b877-b1d76fd71d73", 00:13:00.558 "is_configured": true, 00:13:00.558 "data_offset": 0, 00:13:00.558 "data_size": 65536 00:13:00.558 }, 00:13:00.558 { 00:13:00.558 "name": "BaseBdev2", 00:13:00.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.558 "is_configured": false, 00:13:00.558 "data_offset": 0, 00:13:00.558 "data_size": 0 00:13:00.558 }, 00:13:00.558 { 00:13:00.558 "name": "BaseBdev3", 00:13:00.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.558 "is_configured": false, 00:13:00.558 "data_offset": 0, 00:13:00.558 "data_size": 0 00:13:00.558 } 00:13:00.558 ] 00:13:00.558 }' 00:13:00.558 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.558 06:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.125 [2024-12-06 06:39:19.519978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:01.125 [2024-12-06 06:39:19.520056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.125 [2024-12-06 
06:39:19.528032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.125 [2024-12-06 06:39:19.530829] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:01.125 [2024-12-06 06:39:19.530891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:01.125 [2024-12-06 06:39:19.530908] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:01.125 [2024-12-06 06:39:19.530939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.125 "name": "Existed_Raid", 00:13:01.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.125 "strip_size_kb": 64, 00:13:01.125 "state": "configuring", 00:13:01.125 "raid_level": "raid0", 00:13:01.125 "superblock": false, 00:13:01.125 "num_base_bdevs": 3, 00:13:01.125 "num_base_bdevs_discovered": 1, 00:13:01.125 "num_base_bdevs_operational": 3, 00:13:01.125 "base_bdevs_list": [ 00:13:01.125 { 00:13:01.125 "name": "BaseBdev1", 00:13:01.125 "uuid": "0745d0af-57ec-4e34-b877-b1d76fd71d73", 00:13:01.125 "is_configured": true, 00:13:01.125 "data_offset": 0, 00:13:01.125 "data_size": 65536 00:13:01.125 }, 00:13:01.125 { 00:13:01.125 "name": "BaseBdev2", 00:13:01.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.125 "is_configured": false, 00:13:01.125 "data_offset": 0, 00:13:01.125 "data_size": 0 00:13:01.125 }, 00:13:01.125 { 00:13:01.125 "name": "BaseBdev3", 00:13:01.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.125 "is_configured": false, 00:13:01.125 "data_offset": 0, 00:13:01.125 "data_size": 0 00:13:01.125 } 00:13:01.125 ] 00:13:01.125 }' 00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:13:01.125 06:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.692 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:01.692 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.692 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.692 [2024-12-06 06:39:20.088604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:01.692 BaseBdev2 00:13:01.692 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.692 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:01.692 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:01.692 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:01.692 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:01.692 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:01.693 06:39:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.693 [ 00:13:01.693 { 00:13:01.693 "name": "BaseBdev2", 00:13:01.693 "aliases": [ 00:13:01.693 "b4fd1e5b-aa95-4042-98af-5606f3eaaad8" 00:13:01.693 ], 00:13:01.693 "product_name": "Malloc disk", 00:13:01.693 "block_size": 512, 00:13:01.693 "num_blocks": 65536, 00:13:01.693 "uuid": "b4fd1e5b-aa95-4042-98af-5606f3eaaad8", 00:13:01.693 "assigned_rate_limits": { 00:13:01.693 "rw_ios_per_sec": 0, 00:13:01.693 "rw_mbytes_per_sec": 0, 00:13:01.693 "r_mbytes_per_sec": 0, 00:13:01.693 "w_mbytes_per_sec": 0 00:13:01.693 }, 00:13:01.693 "claimed": true, 00:13:01.693 "claim_type": "exclusive_write", 00:13:01.693 "zoned": false, 00:13:01.693 "supported_io_types": { 00:13:01.693 "read": true, 00:13:01.693 "write": true, 00:13:01.693 "unmap": true, 00:13:01.693 "flush": true, 00:13:01.693 "reset": true, 00:13:01.693 "nvme_admin": false, 00:13:01.693 "nvme_io": false, 00:13:01.693 "nvme_io_md": false, 00:13:01.693 "write_zeroes": true, 00:13:01.693 "zcopy": true, 00:13:01.693 "get_zone_info": false, 00:13:01.693 "zone_management": false, 00:13:01.693 "zone_append": false, 00:13:01.693 "compare": false, 00:13:01.693 "compare_and_write": false, 00:13:01.693 "abort": true, 00:13:01.693 "seek_hole": false, 00:13:01.693 "seek_data": false, 00:13:01.693 "copy": true, 00:13:01.693 "nvme_iov_md": false 00:13:01.693 }, 00:13:01.693 "memory_domains": [ 00:13:01.693 { 00:13:01.693 "dma_device_id": "system", 00:13:01.693 "dma_device_type": 1 00:13:01.693 }, 00:13:01.693 { 00:13:01.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.693 "dma_device_type": 2 00:13:01.693 } 00:13:01.693 ], 00:13:01.693 "driver_specific": {} 00:13:01.693 } 00:13:01.693 ] 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.693 06:39:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.693 "name": "Existed_Raid", 00:13:01.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.693 "strip_size_kb": 64, 00:13:01.693 "state": "configuring", 00:13:01.693 "raid_level": "raid0", 00:13:01.693 "superblock": false, 00:13:01.693 "num_base_bdevs": 3, 00:13:01.693 "num_base_bdevs_discovered": 2, 00:13:01.693 "num_base_bdevs_operational": 3, 00:13:01.693 "base_bdevs_list": [ 00:13:01.693 { 00:13:01.693 "name": "BaseBdev1", 00:13:01.693 "uuid": "0745d0af-57ec-4e34-b877-b1d76fd71d73", 00:13:01.693 "is_configured": true, 00:13:01.693 "data_offset": 0, 00:13:01.693 "data_size": 65536 00:13:01.693 }, 00:13:01.693 { 00:13:01.693 "name": "BaseBdev2", 00:13:01.693 "uuid": "b4fd1e5b-aa95-4042-98af-5606f3eaaad8", 00:13:01.693 "is_configured": true, 00:13:01.693 "data_offset": 0, 00:13:01.693 "data_size": 65536 00:13:01.693 }, 00:13:01.693 { 00:13:01.693 "name": "BaseBdev3", 00:13:01.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.693 "is_configured": false, 00:13:01.693 "data_offset": 0, 00:13:01.693 "data_size": 0 00:13:01.693 } 00:13:01.693 ] 00:13:01.693 }' 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.693 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.262 [2024-12-06 06:39:20.736303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:02.262 [2024-12-06 06:39:20.736540] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:02.262 [2024-12-06 06:39:20.736578] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:02.262 [2024-12-06 06:39:20.736932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:02.262 [2024-12-06 06:39:20.737202] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:02.262 [2024-12-06 06:39:20.737221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:02.262 BaseBdev3 00:13:02.262 [2024-12-06 06:39:20.737573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.262 
06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.262 [ 00:13:02.262 { 00:13:02.262 "name": "BaseBdev3", 00:13:02.262 "aliases": [ 00:13:02.262 "a576b469-e1f2-4f2a-bf45-f1ae62f9d05f" 00:13:02.262 ], 00:13:02.262 "product_name": "Malloc disk", 00:13:02.262 "block_size": 512, 00:13:02.262 "num_blocks": 65536, 00:13:02.262 "uuid": "a576b469-e1f2-4f2a-bf45-f1ae62f9d05f", 00:13:02.262 "assigned_rate_limits": { 00:13:02.262 "rw_ios_per_sec": 0, 00:13:02.262 "rw_mbytes_per_sec": 0, 00:13:02.262 "r_mbytes_per_sec": 0, 00:13:02.262 "w_mbytes_per_sec": 0 00:13:02.262 }, 00:13:02.262 "claimed": true, 00:13:02.262 "claim_type": "exclusive_write", 00:13:02.262 "zoned": false, 00:13:02.262 "supported_io_types": { 00:13:02.262 "read": true, 00:13:02.262 "write": true, 00:13:02.262 "unmap": true, 00:13:02.262 "flush": true, 00:13:02.262 "reset": true, 00:13:02.262 "nvme_admin": false, 00:13:02.262 "nvme_io": false, 00:13:02.262 "nvme_io_md": false, 00:13:02.262 "write_zeroes": true, 00:13:02.262 "zcopy": true, 00:13:02.262 "get_zone_info": false, 00:13:02.262 "zone_management": false, 00:13:02.262 "zone_append": false, 00:13:02.262 "compare": false, 00:13:02.262 "compare_and_write": false, 00:13:02.262 "abort": true, 00:13:02.262 "seek_hole": false, 00:13:02.262 "seek_data": false, 00:13:02.262 "copy": true, 00:13:02.262 "nvme_iov_md": false 00:13:02.262 }, 00:13:02.262 "memory_domains": [ 00:13:02.262 { 00:13:02.262 "dma_device_id": "system", 00:13:02.262 "dma_device_type": 1 00:13:02.262 }, 00:13:02.262 { 00:13:02.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.262 "dma_device_type": 2 00:13:02.262 } 00:13:02.262 ], 00:13:02.262 "driver_specific": {} 00:13:02.262 } 00:13:02.262 ] 
00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.262 "name": "Existed_Raid", 00:13:02.262 "uuid": "488c0861-75b0-4c31-8806-8c3c27849ceb", 00:13:02.262 "strip_size_kb": 64, 00:13:02.262 "state": "online", 00:13:02.262 "raid_level": "raid0", 00:13:02.262 "superblock": false, 00:13:02.262 "num_base_bdevs": 3, 00:13:02.262 "num_base_bdevs_discovered": 3, 00:13:02.262 "num_base_bdevs_operational": 3, 00:13:02.262 "base_bdevs_list": [ 00:13:02.262 { 00:13:02.262 "name": "BaseBdev1", 00:13:02.262 "uuid": "0745d0af-57ec-4e34-b877-b1d76fd71d73", 00:13:02.262 "is_configured": true, 00:13:02.262 "data_offset": 0, 00:13:02.262 "data_size": 65536 00:13:02.262 }, 00:13:02.262 { 00:13:02.262 "name": "BaseBdev2", 00:13:02.262 "uuid": "b4fd1e5b-aa95-4042-98af-5606f3eaaad8", 00:13:02.262 "is_configured": true, 00:13:02.262 "data_offset": 0, 00:13:02.262 "data_size": 65536 00:13:02.262 }, 00:13:02.262 { 00:13:02.262 "name": "BaseBdev3", 00:13:02.262 "uuid": "a576b469-e1f2-4f2a-bf45-f1ae62f9d05f", 00:13:02.262 "is_configured": true, 00:13:02.262 "data_offset": 0, 00:13:02.262 "data_size": 65536 00:13:02.262 } 00:13:02.262 ] 00:13:02.262 }' 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.262 06:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.830 [2024-12-06 06:39:21.300976] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:02.830 "name": "Existed_Raid", 00:13:02.830 "aliases": [ 00:13:02.830 "488c0861-75b0-4c31-8806-8c3c27849ceb" 00:13:02.830 ], 00:13:02.830 "product_name": "Raid Volume", 00:13:02.830 "block_size": 512, 00:13:02.830 "num_blocks": 196608, 00:13:02.830 "uuid": "488c0861-75b0-4c31-8806-8c3c27849ceb", 00:13:02.830 "assigned_rate_limits": { 00:13:02.830 "rw_ios_per_sec": 0, 00:13:02.830 "rw_mbytes_per_sec": 0, 00:13:02.830 "r_mbytes_per_sec": 0, 00:13:02.830 "w_mbytes_per_sec": 0 00:13:02.830 }, 00:13:02.830 "claimed": false, 00:13:02.830 "zoned": false, 00:13:02.830 "supported_io_types": { 00:13:02.830 "read": true, 00:13:02.830 "write": true, 00:13:02.830 "unmap": true, 00:13:02.830 "flush": true, 00:13:02.830 "reset": true, 00:13:02.830 "nvme_admin": false, 00:13:02.830 "nvme_io": false, 00:13:02.830 "nvme_io_md": false, 00:13:02.830 "write_zeroes": true, 00:13:02.830 "zcopy": false, 00:13:02.830 "get_zone_info": false, 00:13:02.830 "zone_management": false, 00:13:02.830 
"zone_append": false, 00:13:02.830 "compare": false, 00:13:02.830 "compare_and_write": false, 00:13:02.830 "abort": false, 00:13:02.830 "seek_hole": false, 00:13:02.830 "seek_data": false, 00:13:02.830 "copy": false, 00:13:02.830 "nvme_iov_md": false 00:13:02.830 }, 00:13:02.830 "memory_domains": [ 00:13:02.830 { 00:13:02.830 "dma_device_id": "system", 00:13:02.830 "dma_device_type": 1 00:13:02.830 }, 00:13:02.830 { 00:13:02.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.830 "dma_device_type": 2 00:13:02.830 }, 00:13:02.830 { 00:13:02.830 "dma_device_id": "system", 00:13:02.830 "dma_device_type": 1 00:13:02.830 }, 00:13:02.830 { 00:13:02.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.830 "dma_device_type": 2 00:13:02.830 }, 00:13:02.830 { 00:13:02.830 "dma_device_id": "system", 00:13:02.830 "dma_device_type": 1 00:13:02.830 }, 00:13:02.830 { 00:13:02.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.830 "dma_device_type": 2 00:13:02.830 } 00:13:02.830 ], 00:13:02.830 "driver_specific": { 00:13:02.830 "raid": { 00:13:02.830 "uuid": "488c0861-75b0-4c31-8806-8c3c27849ceb", 00:13:02.830 "strip_size_kb": 64, 00:13:02.830 "state": "online", 00:13:02.830 "raid_level": "raid0", 00:13:02.830 "superblock": false, 00:13:02.830 "num_base_bdevs": 3, 00:13:02.830 "num_base_bdevs_discovered": 3, 00:13:02.830 "num_base_bdevs_operational": 3, 00:13:02.830 "base_bdevs_list": [ 00:13:02.830 { 00:13:02.830 "name": "BaseBdev1", 00:13:02.830 "uuid": "0745d0af-57ec-4e34-b877-b1d76fd71d73", 00:13:02.830 "is_configured": true, 00:13:02.830 "data_offset": 0, 00:13:02.830 "data_size": 65536 00:13:02.830 }, 00:13:02.830 { 00:13:02.830 "name": "BaseBdev2", 00:13:02.830 "uuid": "b4fd1e5b-aa95-4042-98af-5606f3eaaad8", 00:13:02.830 "is_configured": true, 00:13:02.830 "data_offset": 0, 00:13:02.830 "data_size": 65536 00:13:02.830 }, 00:13:02.830 { 00:13:02.830 "name": "BaseBdev3", 00:13:02.830 "uuid": "a576b469-e1f2-4f2a-bf45-f1ae62f9d05f", 00:13:02.830 "is_configured": true, 
00:13:02.830 "data_offset": 0, 00:13:02.830 "data_size": 65536 00:13:02.830 } 00:13:02.830 ] 00:13:02.830 } 00:13:02.830 } 00:13:02.830 }' 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:02.830 BaseBdev2 00:13:02.830 BaseBdev3' 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.830 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.090 [2024-12-06 06:39:21.620761] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:03.090 [2024-12-06 06:39:21.620942] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:03.090 [2024-12-06 06:39:21.621128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.090 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.350 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:03.350 "name": "Existed_Raid",
00:13:03.350 "uuid": "488c0861-75b0-4c31-8806-8c3c27849ceb",
00:13:03.350 "strip_size_kb": 64,
00:13:03.350 "state": "offline",
00:13:03.350 "raid_level": "raid0",
00:13:03.350 "superblock": false,
00:13:03.350 "num_base_bdevs": 3,
00:13:03.350 "num_base_bdevs_discovered": 2,
00:13:03.350 "num_base_bdevs_operational": 2,
00:13:03.350 "base_bdevs_list": [
00:13:03.350 {
00:13:03.350 "name": null,
00:13:03.350 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:03.350 "is_configured": false,
00:13:03.350 "data_offset": 0,
00:13:03.350 "data_size": 65536
00:13:03.350 },
00:13:03.350 {
00:13:03.350 "name": "BaseBdev2",
00:13:03.350 "uuid": "b4fd1e5b-aa95-4042-98af-5606f3eaaad8",
00:13:03.350 "is_configured": true,
00:13:03.350 "data_offset": 0,
00:13:03.350 "data_size": 65536
00:13:03.350 },
00:13:03.350 {
00:13:03.350 "name": "BaseBdev3",
00:13:03.350 "uuid": "a576b469-e1f2-4f2a-bf45-f1ae62f9d05f",
00:13:03.350 "is_configured": true,
00:13:03.350 "data_offset": 0,
00:13:03.350 "data_size": 65536
00:13:03.350 }
00:13:03.350 ]
00:13:03.350 }'
00:13:03.350 06:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:03.350 06:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.609 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:13:03.609 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:13:03.609 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:03.609 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.609 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.609 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:13:03.609 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.868 [2024-12-06 06:39:22.293603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.868 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:03.868 [2024-12-06 06:39:22.461420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:13:03.868 [2024-12-06 06:39:22.462516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.128 BaseBdev2
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.128 [
00:13:04.128 {
00:13:04.128 "name": "BaseBdev2",
00:13:04.128 "aliases": [
00:13:04.128 "fffa3d0d-9afb-45da-88bb-05533f4e9ea5"
00:13:04.128 ],
00:13:04.128 "product_name": "Malloc disk",
00:13:04.128 "block_size": 512,
00:13:04.128 "num_blocks": 65536,
00:13:04.128 "uuid": "fffa3d0d-9afb-45da-88bb-05533f4e9ea5",
00:13:04.128 "assigned_rate_limits": {
00:13:04.128 "rw_ios_per_sec": 0,
00:13:04.128 "rw_mbytes_per_sec": 0,
00:13:04.128 "r_mbytes_per_sec": 0,
00:13:04.128 "w_mbytes_per_sec": 0
00:13:04.128 },
00:13:04.128 "claimed": false,
00:13:04.128 "zoned": false,
00:13:04.128 "supported_io_types": {
00:13:04.128 "read": true,
00:13:04.128 "write": true,
00:13:04.128 "unmap": true,
00:13:04.128 "flush": true,
00:13:04.128 "reset": true,
00:13:04.128 "nvme_admin": false,
00:13:04.128 "nvme_io": false,
00:13:04.128 "nvme_io_md": false,
00:13:04.128 "write_zeroes": true,
00:13:04.128 "zcopy": true,
00:13:04.128 "get_zone_info": false,
00:13:04.128 "zone_management": false,
00:13:04.128 "zone_append": false,
00:13:04.128 "compare": false,
00:13:04.128 "compare_and_write": false,
00:13:04.128 "abort": true,
00:13:04.128 "seek_hole": false,
00:13:04.128 "seek_data": false,
00:13:04.128 "copy": true,
00:13:04.128 "nvme_iov_md": false
00:13:04.128 },
00:13:04.128 "memory_domains": [
00:13:04.128 {
00:13:04.128 "dma_device_id": "system",
00:13:04.128 "dma_device_type": 1
00:13:04.128 },
00:13:04.128 {
00:13:04.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:04.128 "dma_device_type": 2
00:13:04.128 }
00:13:04.128 ],
00:13:04.128 "driver_specific": {}
00:13:04.128 }
00:13:04.128 ]
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.128 BaseBdev3
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.128 [
00:13:04.128 {
00:13:04.128 "name": "BaseBdev3",
00:13:04.128 "aliases": [
00:13:04.128 "88d9746e-cf40-47ba-9a32-a879e822c196"
00:13:04.128 ],
00:13:04.128 "product_name": "Malloc disk",
00:13:04.128 "block_size": 512,
00:13:04.128 "num_blocks": 65536,
00:13:04.128 "uuid": "88d9746e-cf40-47ba-9a32-a879e822c196",
00:13:04.128 "assigned_rate_limits": {
00:13:04.128 "rw_ios_per_sec": 0,
00:13:04.128 "rw_mbytes_per_sec": 0,
00:13:04.128 "r_mbytes_per_sec": 0,
00:13:04.128 "w_mbytes_per_sec": 0
00:13:04.128 },
00:13:04.128 "claimed": false,
00:13:04.128 "zoned": false,
00:13:04.128 "supported_io_types": {
00:13:04.128 "read": true,
00:13:04.128 "write": true,
00:13:04.128 "unmap": true,
00:13:04.128 "flush": true,
00:13:04.128 "reset": true,
00:13:04.128 "nvme_admin": false,
00:13:04.128 "nvme_io": false,
00:13:04.128 "nvme_io_md": false,
00:13:04.128 "write_zeroes": true,
00:13:04.128 "zcopy": true,
00:13:04.128 "get_zone_info": false,
00:13:04.128 "zone_management": false,
00:13:04.128 "zone_append": false,
00:13:04.128 "compare": false,
00:13:04.128 "compare_and_write": false,
00:13:04.128 "abort": true,
00:13:04.128 "seek_hole": false,
00:13:04.128 "seek_data": false,
00:13:04.128 "copy": true,
00:13:04.128 "nvme_iov_md": false
00:13:04.128 },
00:13:04.128 "memory_domains": [
00:13:04.128 {
00:13:04.128 "dma_device_id": "system",
00:13:04.128 "dma_device_type": 1
00:13:04.128 },
00:13:04.128 {
00:13:04.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:04.128 "dma_device_type": 2
00:13:04.128 }
00:13:04.128 ],
00:13:04.128 "driver_specific": {}
00:13:04.128 }
00:13:04.128 ]
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.128 [2024-12-06 06:39:22.760098] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:04.128 [2024-12-06 06:39:22.760153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:04.128 [2024-12-06 06:39:22.760202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:04.128 [2024-12-06 06:39:22.762644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.128 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:13:04.129 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:04.129 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:04.129 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:13:04.129 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:04.129 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:04.129 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:04.129 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:04.129 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:04.129 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:04.129 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:04.129 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.129 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.129 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:04.388 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.388 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:04.388 "name": "Existed_Raid",
00:13:04.388 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:04.388 "strip_size_kb": 64,
00:13:04.388 "state": "configuring",
00:13:04.388 "raid_level": "raid0",
00:13:04.388 "superblock": false,
00:13:04.388 "num_base_bdevs": 3,
00:13:04.388 "num_base_bdevs_discovered": 2,
00:13:04.388 "num_base_bdevs_operational": 3,
00:13:04.388 "base_bdevs_list": [
00:13:04.388 {
00:13:04.388 "name": "BaseBdev1",
00:13:04.388 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:04.388 "is_configured": false,
00:13:04.388 "data_offset": 0,
00:13:04.388 "data_size": 0
00:13:04.388 },
00:13:04.388 {
00:13:04.388 "name": "BaseBdev2",
00:13:04.388 "uuid": "fffa3d0d-9afb-45da-88bb-05533f4e9ea5",
00:13:04.388 "is_configured": true,
00:13:04.388 "data_offset": 0,
00:13:04.388 "data_size": 65536
00:13:04.388 },
00:13:04.388 {
00:13:04.388 "name": "BaseBdev3",
00:13:04.388 "uuid": "88d9746e-cf40-47ba-9a32-a879e822c196",
00:13:04.388 "is_configured": true,
00:13:04.388 "data_offset": 0,
00:13:04.388 "data_size": 65536
00:13:04.388 }
00:13:04.388 ]
00:13:04.388 }'
00:13:04.388 06:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:04.388 06:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.647 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:13:04.647 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.647 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.647 [2024-12-06 06:39:23.284340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:13:04.647 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.647 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:13:04.647 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:04.647 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:04.647 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:13:04.647 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:04.647 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:04.647 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:04.647 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:04.647 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:04.647 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:04.905 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:04.905 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.905 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:04.905 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:04.905 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.905 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:04.905 "name": "Existed_Raid",
00:13:04.905 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:04.905 "strip_size_kb": 64,
00:13:04.905 "state": "configuring",
00:13:04.905 "raid_level": "raid0",
00:13:04.905 "superblock": false,
00:13:04.905 "num_base_bdevs": 3,
00:13:04.905 "num_base_bdevs_discovered": 1,
00:13:04.905 "num_base_bdevs_operational": 3,
00:13:04.905 "base_bdevs_list": [
00:13:04.905 {
00:13:04.905 "name": "BaseBdev1",
00:13:04.905 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:04.905 "is_configured": false,
00:13:04.905 "data_offset": 0,
00:13:04.905 "data_size": 0
00:13:04.905 },
00:13:04.905 {
00:13:04.905 "name": null,
00:13:04.905 "uuid": "fffa3d0d-9afb-45da-88bb-05533f4e9ea5",
00:13:04.905 "is_configured": false,
00:13:04.905 "data_offset": 0,
00:13:04.905 "data_size": 65536
00:13:04.905 },
00:13:04.905 {
00:13:04.905 "name": "BaseBdev3",
00:13:04.905 "uuid": "88d9746e-cf40-47ba-9a32-a879e822c196",
00:13:04.905 "is_configured": true,
00:13:04.905 "data_offset": 0,
00:13:04.905 "data_size": 65536
00:13:04.905 }
00:13:04.905 ]
00:13:04.905 }'
00:13:04.905 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:04.905 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.164 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:13:05.165 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:05.165 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:05.165 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.165 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.423 [2024-12-06 06:39:23.883885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:05.423 BaseBdev1
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:05.423 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.423 [
00:13:05.423 {
00:13:05.423 "name": "BaseBdev1",
00:13:05.423 "aliases": [
00:13:05.423 "20e88dda-9ca0-421b-a6e4-87289757c893"
00:13:05.423 ],
00:13:05.423 "product_name": "Malloc disk",
00:13:05.423 "block_size": 512,
00:13:05.423 "num_blocks": 65536,
00:13:05.423 "uuid": "20e88dda-9ca0-421b-a6e4-87289757c893",
00:13:05.423 "assigned_rate_limits": {
00:13:05.423 "rw_ios_per_sec": 0,
00:13:05.423 "rw_mbytes_per_sec": 0,
00:13:05.423 "r_mbytes_per_sec": 0,
00:13:05.423 "w_mbytes_per_sec": 0
00:13:05.423 },
00:13:05.423 "claimed": true,
00:13:05.423 "claim_type": "exclusive_write",
00:13:05.423 "zoned": false,
00:13:05.423 "supported_io_types": {
00:13:05.423 "read": true,
00:13:05.423 "write": true,
00:13:05.423 "unmap": true,
00:13:05.423 "flush": true,
00:13:05.424 "reset": true,
00:13:05.424 "nvme_admin": false,
00:13:05.424 "nvme_io": false,
00:13:05.424 "nvme_io_md": false,
00:13:05.424 "write_zeroes": true,
00:13:05.424 "zcopy": true,
00:13:05.424 "get_zone_info": false,
00:13:05.424 "zone_management": false,
00:13:05.424 "zone_append": false,
00:13:05.424 "compare": false,
00:13:05.424 "compare_and_write": false,
00:13:05.424 "abort": true,
00:13:05.424 "seek_hole": false,
00:13:05.424 "seek_data": false,
00:13:05.424 "copy": true,
00:13:05.424 "nvme_iov_md": false
00:13:05.424 },
00:13:05.424 "memory_domains": [
00:13:05.424 {
00:13:05.424 "dma_device_id": "system",
00:13:05.424 "dma_device_type": 1
00:13:05.424 },
00:13:05.424 {
00:13:05.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:05.424 "dma_device_type": 2
00:13:05.424 }
00:13:05.424 ],
00:13:05.424 "driver_specific": {}
00:13:05.424 }
00:13:05.424 ]
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:05.424 "name": "Existed_Raid",
00:13:05.424 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:05.424 "strip_size_kb": 64,
00:13:05.424 "state": "configuring",
00:13:05.424 "raid_level": "raid0",
00:13:05.424 "superblock": false,
00:13:05.424 "num_base_bdevs": 3,
00:13:05.424 "num_base_bdevs_discovered": 2,
00:13:05.424 "num_base_bdevs_operational": 3,
00:13:05.424 "base_bdevs_list": [
00:13:05.424 {
00:13:05.424 "name": "BaseBdev1",
00:13:05.424 "uuid": "20e88dda-9ca0-421b-a6e4-87289757c893",
00:13:05.424 "is_configured": true,
00:13:05.424 "data_offset": 0,
00:13:05.424 "data_size": 65536
00:13:05.424 },
00:13:05.424 {
00:13:05.424 "name": null,
00:13:05.424 "uuid": "fffa3d0d-9afb-45da-88bb-05533f4e9ea5",
00:13:05.424 "is_configured": false,
00:13:05.424 "data_offset": 0,
00:13:05.424 "data_size": 65536
00:13:05.424 },
00:13:05.424 {
00:13:05.424 "name": "BaseBdev3",
00:13:05.424 "uuid": "88d9746e-cf40-47ba-9a32-a879e822c196",
00:13:05.424 "is_configured": true,
00:13:05.424 "data_offset": 0,
00:13:05.424 "data_size": 65536
00:13:05.424 }
00:13:05.424 ]
00:13:05.424 }'
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:05.424 06:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.992 [2024-12-06 06:39:24.472241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:05.992 "name": "Existed_Raid",
00:13:05.992 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:05.992 "strip_size_kb": 64,
00:13:05.992 "state": "configuring",
00:13:05.992 "raid_level": "raid0",
00:13:05.992 "superblock": false,
00:13:05.992 "num_base_bdevs": 3,
00:13:05.992 "num_base_bdevs_discovered": 1,
00:13:05.992 "num_base_bdevs_operational": 3,
00:13:05.992 "base_bdevs_list": [
00:13:05.992 {
00:13:05.992 "name": "BaseBdev1",
00:13:05.992 "uuid": "20e88dda-9ca0-421b-a6e4-87289757c893",
00:13:05.992 "is_configured": true,
00:13:05.992 "data_offset": 0,
00:13:05.992 "data_size": 65536
00:13:05.992 },
00:13:05.992 {
00:13:05.992 "name": null,
00:13:05.992 "uuid": "fffa3d0d-9afb-45da-88bb-05533f4e9ea5",
00:13:05.992 "is_configured": false,
00:13:05.992 "data_offset": 0,
00:13:05.992 "data_size": 65536
00:13:05.992 },
00:13:05.992 {
00:13:05.992 "name": null,
00:13:05.992 "uuid": "88d9746e-cf40-47ba-9a32-a879e822c196",
00:13:05.992 "is_configured": false,
00:13:05.992 "data_offset": 0,
00:13:05.992 "data_size": 65536
00:13:05.992 }
00:13:05.992 ]
00:13:05.992 }'
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:05.992 06:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:06.573 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:06.573 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.573 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:06.573 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:13:06.573 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:06.574 [2024-12-06 06:39:25.064508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:06.574 "name": "Existed_Raid",
00:13:06.574 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:06.574 "strip_size_kb": 64,
00:13:06.574 "state": "configuring",
00:13:06.574 "raid_level": "raid0",
00:13:06.574 "superblock": false,
00:13:06.574 "num_base_bdevs": 3,
00:13:06.574 "num_base_bdevs_discovered": 2,
00:13:06.574 "num_base_bdevs_operational": 3,
00:13:06.574 "base_bdevs_list": [
00:13:06.574 {
00:13:06.574 "name": "BaseBdev1",
00:13:06.574 "uuid": "20e88dda-9ca0-421b-a6e4-87289757c893",
00:13:06.574 "is_configured": true,
00:13:06.574 "data_offset": 0,
00:13:06.574 "data_size": 65536
00:13:06.574 },
00:13:06.574 {
00:13:06.574 "name": null,
00:13:06.574 "uuid": "fffa3d0d-9afb-45da-88bb-05533f4e9ea5",
00:13:06.574 "is_configured": false,
00:13:06.574 "data_offset": 0,
00:13:06.574 "data_size": 65536
00:13:06.574 },
00:13:06.574 {
00:13:06.574 "name": "BaseBdev3",
00:13:06.574 "uuid": "88d9746e-cf40-47ba-9a32-a879e822c196",
00:13:06.574 "is_configured": true,
00:13:06.574 "data_offset": 0,
00:13:06.574 "data_size": 65536
00:13:06.574 }
00:13:06.574 ]
00:13:06.574 }'
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:06.574 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:07.140 [2024-12-06 06:39:25.648680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:07.140 06:39:25 bdev_raid.raid_state_function_test --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.398 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.398 "name": "Existed_Raid", 00:13:07.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.398 "strip_size_kb": 64, 00:13:07.398 "state": "configuring", 00:13:07.398 "raid_level": "raid0", 00:13:07.398 "superblock": false, 00:13:07.398 "num_base_bdevs": 3, 00:13:07.398 "num_base_bdevs_discovered": 1, 00:13:07.398 "num_base_bdevs_operational": 3, 00:13:07.398 "base_bdevs_list": [ 00:13:07.398 { 00:13:07.398 "name": null, 00:13:07.398 "uuid": "20e88dda-9ca0-421b-a6e4-87289757c893", 00:13:07.398 "is_configured": false, 00:13:07.398 "data_offset": 0, 00:13:07.398 "data_size": 65536 00:13:07.398 }, 00:13:07.398 { 00:13:07.398 "name": null, 00:13:07.398 "uuid": "fffa3d0d-9afb-45da-88bb-05533f4e9ea5", 00:13:07.398 "is_configured": false, 00:13:07.398 "data_offset": 0, 00:13:07.398 "data_size": 65536 00:13:07.398 }, 00:13:07.398 { 00:13:07.398 "name": "BaseBdev3", 00:13:07.398 "uuid": "88d9746e-cf40-47ba-9a32-a879e822c196", 00:13:07.398 "is_configured": true, 00:13:07.398 "data_offset": 0, 00:13:07.398 "data_size": 65536 00:13:07.398 } 00:13:07.398 ] 00:13:07.398 }' 00:13:07.398 06:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.399 06:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.657 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.657 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:07.657 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.657 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.657 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.916 [2024-12-06 06:39:26.325498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.916 "name": "Existed_Raid", 00:13:07.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.916 "strip_size_kb": 64, 00:13:07.916 "state": "configuring", 00:13:07.916 "raid_level": "raid0", 00:13:07.916 "superblock": false, 00:13:07.916 "num_base_bdevs": 3, 00:13:07.916 "num_base_bdevs_discovered": 2, 00:13:07.916 "num_base_bdevs_operational": 3, 00:13:07.916 "base_bdevs_list": [ 00:13:07.916 { 00:13:07.916 "name": null, 00:13:07.916 "uuid": "20e88dda-9ca0-421b-a6e4-87289757c893", 00:13:07.916 "is_configured": false, 00:13:07.916 "data_offset": 0, 00:13:07.916 "data_size": 65536 00:13:07.916 }, 00:13:07.916 { 00:13:07.916 "name": "BaseBdev2", 00:13:07.916 "uuid": "fffa3d0d-9afb-45da-88bb-05533f4e9ea5", 00:13:07.916 "is_configured": true, 00:13:07.916 "data_offset": 0, 00:13:07.916 "data_size": 65536 00:13:07.916 }, 00:13:07.916 { 00:13:07.916 "name": "BaseBdev3", 00:13:07.916 "uuid": "88d9746e-cf40-47ba-9a32-a879e822c196", 00:13:07.916 "is_configured": true, 00:13:07.916 "data_offset": 0, 00:13:07.916 "data_size": 65536 00:13:07.916 } 00:13:07.916 ] 00:13:07.916 }' 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.916 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.485 06:39:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 20e88dda-9ca0-421b-a6e4-87289757c893 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.485 [2024-12-06 06:39:26.988815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:08.485 [2024-12-06 06:39:26.989179] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:08.485 [2024-12-06 06:39:26.989213] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:08.485 [2024-12-06 06:39:26.989567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:13:08.485 [2024-12-06 06:39:26.989774] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:08.485 [2024-12-06 06:39:26.989791] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:08.485 NewBaseBdev 00:13:08.485 [2024-12-06 06:39:26.990104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.485 06:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:13:08.485 [ 00:13:08.485 { 00:13:08.485 "name": "NewBaseBdev", 00:13:08.485 "aliases": [ 00:13:08.485 "20e88dda-9ca0-421b-a6e4-87289757c893" 00:13:08.485 ], 00:13:08.485 "product_name": "Malloc disk", 00:13:08.485 "block_size": 512, 00:13:08.485 "num_blocks": 65536, 00:13:08.485 "uuid": "20e88dda-9ca0-421b-a6e4-87289757c893", 00:13:08.485 "assigned_rate_limits": { 00:13:08.485 "rw_ios_per_sec": 0, 00:13:08.485 "rw_mbytes_per_sec": 0, 00:13:08.485 "r_mbytes_per_sec": 0, 00:13:08.485 "w_mbytes_per_sec": 0 00:13:08.485 }, 00:13:08.485 "claimed": true, 00:13:08.485 "claim_type": "exclusive_write", 00:13:08.485 "zoned": false, 00:13:08.485 "supported_io_types": { 00:13:08.485 "read": true, 00:13:08.485 "write": true, 00:13:08.485 "unmap": true, 00:13:08.485 "flush": true, 00:13:08.485 "reset": true, 00:13:08.485 "nvme_admin": false, 00:13:08.485 "nvme_io": false, 00:13:08.485 "nvme_io_md": false, 00:13:08.485 "write_zeroes": true, 00:13:08.485 "zcopy": true, 00:13:08.485 "get_zone_info": false, 00:13:08.485 "zone_management": false, 00:13:08.485 "zone_append": false, 00:13:08.485 "compare": false, 00:13:08.485 "compare_and_write": false, 00:13:08.485 "abort": true, 00:13:08.485 "seek_hole": false, 00:13:08.485 "seek_data": false, 00:13:08.485 "copy": true, 00:13:08.485 "nvme_iov_md": false 00:13:08.485 }, 00:13:08.485 "memory_domains": [ 00:13:08.485 { 00:13:08.485 "dma_device_id": "system", 00:13:08.485 "dma_device_type": 1 00:13:08.485 }, 00:13:08.485 { 00:13:08.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.485 "dma_device_type": 2 00:13:08.485 } 00:13:08.485 ], 00:13:08.485 "driver_specific": {} 00:13:08.485 } 00:13:08.485 ] 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.485 "name": "Existed_Raid", 00:13:08.485 "uuid": "2bb916df-a6d2-4203-a89f-661194490184", 00:13:08.485 "strip_size_kb": 64, 00:13:08.485 "state": "online", 00:13:08.485 "raid_level": "raid0", 00:13:08.485 "superblock": false, 00:13:08.485 "num_base_bdevs": 3, 00:13:08.485 
"num_base_bdevs_discovered": 3, 00:13:08.485 "num_base_bdevs_operational": 3, 00:13:08.485 "base_bdevs_list": [ 00:13:08.485 { 00:13:08.485 "name": "NewBaseBdev", 00:13:08.485 "uuid": "20e88dda-9ca0-421b-a6e4-87289757c893", 00:13:08.485 "is_configured": true, 00:13:08.485 "data_offset": 0, 00:13:08.485 "data_size": 65536 00:13:08.485 }, 00:13:08.485 { 00:13:08.485 "name": "BaseBdev2", 00:13:08.485 "uuid": "fffa3d0d-9afb-45da-88bb-05533f4e9ea5", 00:13:08.485 "is_configured": true, 00:13:08.485 "data_offset": 0, 00:13:08.485 "data_size": 65536 00:13:08.485 }, 00:13:08.485 { 00:13:08.485 "name": "BaseBdev3", 00:13:08.485 "uuid": "88d9746e-cf40-47ba-9a32-a879e822c196", 00:13:08.485 "is_configured": true, 00:13:08.485 "data_offset": 0, 00:13:08.485 "data_size": 65536 00:13:08.485 } 00:13:08.485 ] 00:13:08.485 }' 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.485 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.054 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:09.054 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:09.054 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:09.054 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:09.054 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:09.054 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:09.054 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:09.054 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:09.054 06:39:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.054 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.054 [2024-12-06 06:39:27.533433] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:09.054 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.054 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:09.054 "name": "Existed_Raid", 00:13:09.054 "aliases": [ 00:13:09.054 "2bb916df-a6d2-4203-a89f-661194490184" 00:13:09.054 ], 00:13:09.054 "product_name": "Raid Volume", 00:13:09.054 "block_size": 512, 00:13:09.054 "num_blocks": 196608, 00:13:09.054 "uuid": "2bb916df-a6d2-4203-a89f-661194490184", 00:13:09.055 "assigned_rate_limits": { 00:13:09.055 "rw_ios_per_sec": 0, 00:13:09.055 "rw_mbytes_per_sec": 0, 00:13:09.055 "r_mbytes_per_sec": 0, 00:13:09.055 "w_mbytes_per_sec": 0 00:13:09.055 }, 00:13:09.055 "claimed": false, 00:13:09.055 "zoned": false, 00:13:09.055 "supported_io_types": { 00:13:09.055 "read": true, 00:13:09.055 "write": true, 00:13:09.055 "unmap": true, 00:13:09.055 "flush": true, 00:13:09.055 "reset": true, 00:13:09.055 "nvme_admin": false, 00:13:09.055 "nvme_io": false, 00:13:09.055 "nvme_io_md": false, 00:13:09.055 "write_zeroes": true, 00:13:09.055 "zcopy": false, 00:13:09.055 "get_zone_info": false, 00:13:09.055 "zone_management": false, 00:13:09.055 "zone_append": false, 00:13:09.055 "compare": false, 00:13:09.055 "compare_and_write": false, 00:13:09.055 "abort": false, 00:13:09.055 "seek_hole": false, 00:13:09.055 "seek_data": false, 00:13:09.055 "copy": false, 00:13:09.055 "nvme_iov_md": false 00:13:09.055 }, 00:13:09.055 "memory_domains": [ 00:13:09.055 { 00:13:09.055 "dma_device_id": "system", 00:13:09.055 "dma_device_type": 1 00:13:09.055 }, 00:13:09.055 { 00:13:09.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.055 "dma_device_type": 2 00:13:09.055 }, 
00:13:09.055 { 00:13:09.055 "dma_device_id": "system", 00:13:09.055 "dma_device_type": 1 00:13:09.055 }, 00:13:09.055 { 00:13:09.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.055 "dma_device_type": 2 00:13:09.055 }, 00:13:09.055 { 00:13:09.055 "dma_device_id": "system", 00:13:09.055 "dma_device_type": 1 00:13:09.055 }, 00:13:09.055 { 00:13:09.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.055 "dma_device_type": 2 00:13:09.055 } 00:13:09.055 ], 00:13:09.055 "driver_specific": { 00:13:09.055 "raid": { 00:13:09.055 "uuid": "2bb916df-a6d2-4203-a89f-661194490184", 00:13:09.055 "strip_size_kb": 64, 00:13:09.055 "state": "online", 00:13:09.055 "raid_level": "raid0", 00:13:09.055 "superblock": false, 00:13:09.055 "num_base_bdevs": 3, 00:13:09.055 "num_base_bdevs_discovered": 3, 00:13:09.055 "num_base_bdevs_operational": 3, 00:13:09.055 "base_bdevs_list": [ 00:13:09.055 { 00:13:09.055 "name": "NewBaseBdev", 00:13:09.055 "uuid": "20e88dda-9ca0-421b-a6e4-87289757c893", 00:13:09.055 "is_configured": true, 00:13:09.055 "data_offset": 0, 00:13:09.055 "data_size": 65536 00:13:09.055 }, 00:13:09.055 { 00:13:09.055 "name": "BaseBdev2", 00:13:09.055 "uuid": "fffa3d0d-9afb-45da-88bb-05533f4e9ea5", 00:13:09.055 "is_configured": true, 00:13:09.055 "data_offset": 0, 00:13:09.055 "data_size": 65536 00:13:09.055 }, 00:13:09.055 { 00:13:09.055 "name": "BaseBdev3", 00:13:09.055 "uuid": "88d9746e-cf40-47ba-9a32-a879e822c196", 00:13:09.055 "is_configured": true, 00:13:09.055 "data_offset": 0, 00:13:09.055 "data_size": 65536 00:13:09.055 } 00:13:09.055 ] 00:13:09.055 } 00:13:09.055 } 00:13:09.055 }' 00:13:09.055 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:09.055 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:09.055 BaseBdev2 00:13:09.055 BaseBdev3' 00:13:09.055 06:39:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.055 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:09.055 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.055 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:09.055 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.055 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.055 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.314 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.314 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:09.314 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:09.314 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.314 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:09.314 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.314 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.315 [2024-12-06 06:39:27.845128] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:09.315 [2024-12-06 06:39:27.845173] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.315 [2024-12-06 06:39:27.845279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.315 [2024-12-06 06:39:27.845359] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.315 [2024-12-06 06:39:27.845380] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63996 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63996 ']' 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63996 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63996 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.315 killing process with pid 63996 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63996' 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63996 00:13:09.315 [2024-12-06 06:39:27.886231] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:09.315 06:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63996 00:13:09.574 [2024-12-06 06:39:28.162989] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:10.948 00:13:10.948 real 0m12.005s 00:13:10.948 user 0m19.887s 00:13:10.948 sys 0m1.673s 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.948 ************************************ 00:13:10.948 END TEST raid_state_function_test 00:13:10.948 ************************************ 00:13:10.948 06:39:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:13:10.948 06:39:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:10.948 06:39:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.948 06:39:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:10.948 ************************************ 00:13:10.948 START TEST raid_state_function_test_sb 00:13:10.948 ************************************ 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:10.948 Process raid pid: 64638 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64638 
00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64638' 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64638 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64638 ']' 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.948 06:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.948 [2024-12-06 06:39:29.407386] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:13:10.948 [2024-12-06 06:39:29.408023] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.231 [2024-12-06 06:39:29.593804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.231 [2024-12-06 06:39:29.750778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.543 [2024-12-06 06:39:29.969491] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.543 [2024-12-06 06:39:29.969558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.803 [2024-12-06 06:39:30.380253] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:11.803 [2024-12-06 06:39:30.380318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:11.803 [2024-12-06 06:39:30.380334] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:11.803 [2024-12-06 06:39:30.380350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:11.803 [2024-12-06 06:39:30.380360] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:13:11.803 [2024-12-06 06:39:30.380374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.803 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.803 "name": "Existed_Raid", 00:13:11.803 "uuid": "364297ab-b848-4576-bc4e-987fd06b9693", 00:13:11.803 "strip_size_kb": 64, 00:13:11.803 "state": "configuring", 00:13:11.803 "raid_level": "raid0", 00:13:11.803 "superblock": true, 00:13:11.803 "num_base_bdevs": 3, 00:13:11.803 "num_base_bdevs_discovered": 0, 00:13:11.803 "num_base_bdevs_operational": 3, 00:13:11.803 "base_bdevs_list": [ 00:13:11.803 { 00:13:11.803 "name": "BaseBdev1", 00:13:11.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.803 "is_configured": false, 00:13:11.803 "data_offset": 0, 00:13:11.803 "data_size": 0 00:13:11.803 }, 00:13:11.803 { 00:13:11.803 "name": "BaseBdev2", 00:13:11.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.804 "is_configured": false, 00:13:11.804 "data_offset": 0, 00:13:11.804 "data_size": 0 00:13:11.804 }, 00:13:11.804 { 00:13:11.804 "name": "BaseBdev3", 00:13:11.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.804 "is_configured": false, 00:13:11.804 "data_offset": 0, 00:13:11.804 "data_size": 0 00:13:11.804 } 00:13:11.804 ] 00:13:11.804 }' 00:13:11.804 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.804 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.370 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:12.370 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.370 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.370 [2024-12-06 06:39:30.884404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:12.370 [2024-12-06 06:39:30.884464] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:12.370 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.370 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:12.370 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.370 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.370 [2024-12-06 06:39:30.892409] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:12.370 [2024-12-06 06:39:30.892511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:12.370 [2024-12-06 06:39:30.892534] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:12.370 [2024-12-06 06:39:30.892576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:12.370 [2024-12-06 06:39:30.892594] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:12.370 [2024-12-06 06:39:30.892618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:12.370 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.370 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:12.370 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.370 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.370 [2024-12-06 06:39:30.939488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:12.370 BaseBdev1 
00:13:12.370 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.370 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:12.370 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.371 [ 00:13:12.371 { 00:13:12.371 "name": "BaseBdev1", 00:13:12.371 "aliases": [ 00:13:12.371 "13895976-69c4-42ce-8c12-2b7ba5b0a149" 00:13:12.371 ], 00:13:12.371 "product_name": "Malloc disk", 00:13:12.371 "block_size": 512, 00:13:12.371 "num_blocks": 65536, 00:13:12.371 "uuid": "13895976-69c4-42ce-8c12-2b7ba5b0a149", 00:13:12.371 "assigned_rate_limits": { 00:13:12.371 
"rw_ios_per_sec": 0, 00:13:12.371 "rw_mbytes_per_sec": 0, 00:13:12.371 "r_mbytes_per_sec": 0, 00:13:12.371 "w_mbytes_per_sec": 0 00:13:12.371 }, 00:13:12.371 "claimed": true, 00:13:12.371 "claim_type": "exclusive_write", 00:13:12.371 "zoned": false, 00:13:12.371 "supported_io_types": { 00:13:12.371 "read": true, 00:13:12.371 "write": true, 00:13:12.371 "unmap": true, 00:13:12.371 "flush": true, 00:13:12.371 "reset": true, 00:13:12.371 "nvme_admin": false, 00:13:12.371 "nvme_io": false, 00:13:12.371 "nvme_io_md": false, 00:13:12.371 "write_zeroes": true, 00:13:12.371 "zcopy": true, 00:13:12.371 "get_zone_info": false, 00:13:12.371 "zone_management": false, 00:13:12.371 "zone_append": false, 00:13:12.371 "compare": false, 00:13:12.371 "compare_and_write": false, 00:13:12.371 "abort": true, 00:13:12.371 "seek_hole": false, 00:13:12.371 "seek_data": false, 00:13:12.371 "copy": true, 00:13:12.371 "nvme_iov_md": false 00:13:12.371 }, 00:13:12.371 "memory_domains": [ 00:13:12.371 { 00:13:12.371 "dma_device_id": "system", 00:13:12.371 "dma_device_type": 1 00:13:12.371 }, 00:13:12.371 { 00:13:12.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.371 "dma_device_type": 2 00:13:12.371 } 00:13:12.371 ], 00:13:12.371 "driver_specific": {} 00:13:12.371 } 00:13:12.371 ] 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.371 06:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.631 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.631 "name": "Existed_Raid", 00:13:12.631 "uuid": "7a74f6d5-1257-4454-b413-2a502b3989f4", 00:13:12.631 "strip_size_kb": 64, 00:13:12.631 "state": "configuring", 00:13:12.631 "raid_level": "raid0", 00:13:12.631 "superblock": true, 00:13:12.631 "num_base_bdevs": 3, 00:13:12.631 "num_base_bdevs_discovered": 1, 00:13:12.631 "num_base_bdevs_operational": 3, 00:13:12.631 "base_bdevs_list": [ 00:13:12.631 { 00:13:12.631 "name": "BaseBdev1", 00:13:12.631 "uuid": "13895976-69c4-42ce-8c12-2b7ba5b0a149", 00:13:12.631 "is_configured": true, 00:13:12.631 "data_offset": 2048, 00:13:12.631 "data_size": 63488 
00:13:12.631 }, 00:13:12.631 { 00:13:12.631 "name": "BaseBdev2", 00:13:12.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.631 "is_configured": false, 00:13:12.631 "data_offset": 0, 00:13:12.631 "data_size": 0 00:13:12.631 }, 00:13:12.631 { 00:13:12.631 "name": "BaseBdev3", 00:13:12.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.631 "is_configured": false, 00:13:12.631 "data_offset": 0, 00:13:12.631 "data_size": 0 00:13:12.631 } 00:13:12.631 ] 00:13:12.631 }' 00:13:12.631 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.631 06:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.890 [2024-12-06 06:39:31.479699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:12.890 [2024-12-06 06:39:31.479891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.890 [2024-12-06 06:39:31.487746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:12.890 [2024-12-06 
06:39:31.490146] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:12.890 [2024-12-06 06:39:31.490196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:12.890 [2024-12-06 06:39:31.490213] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:12.890 [2024-12-06 06:39:31.490228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.890 06:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.149 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.149 "name": "Existed_Raid", 00:13:13.149 "uuid": "8831a244-627e-4134-9b10-2da6640f382d", 00:13:13.149 "strip_size_kb": 64, 00:13:13.149 "state": "configuring", 00:13:13.149 "raid_level": "raid0", 00:13:13.149 "superblock": true, 00:13:13.149 "num_base_bdevs": 3, 00:13:13.149 "num_base_bdevs_discovered": 1, 00:13:13.149 "num_base_bdevs_operational": 3, 00:13:13.149 "base_bdevs_list": [ 00:13:13.149 { 00:13:13.149 "name": "BaseBdev1", 00:13:13.149 "uuid": "13895976-69c4-42ce-8c12-2b7ba5b0a149", 00:13:13.149 "is_configured": true, 00:13:13.149 "data_offset": 2048, 00:13:13.149 "data_size": 63488 00:13:13.149 }, 00:13:13.149 { 00:13:13.149 "name": "BaseBdev2", 00:13:13.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.149 "is_configured": false, 00:13:13.149 "data_offset": 0, 00:13:13.149 "data_size": 0 00:13:13.149 }, 00:13:13.149 { 00:13:13.149 "name": "BaseBdev3", 00:13:13.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.149 "is_configured": false, 00:13:13.149 "data_offset": 0, 00:13:13.149 "data_size": 0 00:13:13.149 } 00:13:13.149 ] 00:13:13.149 }' 00:13:13.149 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.149 06:39:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:13.407 06:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:13.407 06:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.408 06:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.408 [2024-12-06 06:39:32.039065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:13.408 BaseBdev2 00:13:13.408 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.408 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:13.408 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:13.408 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:13.408 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:13.408 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:13.408 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:13.408 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:13.408 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.408 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.667 [ 00:13:13.667 { 00:13:13.667 "name": "BaseBdev2", 00:13:13.667 "aliases": [ 00:13:13.667 "c571dd9d-6544-4f1b-af99-ac3d29f692bb" 00:13:13.667 ], 00:13:13.667 "product_name": "Malloc disk", 00:13:13.667 "block_size": 512, 00:13:13.667 "num_blocks": 65536, 00:13:13.667 "uuid": "c571dd9d-6544-4f1b-af99-ac3d29f692bb", 00:13:13.667 "assigned_rate_limits": { 00:13:13.667 "rw_ios_per_sec": 0, 00:13:13.667 "rw_mbytes_per_sec": 0, 00:13:13.667 "r_mbytes_per_sec": 0, 00:13:13.667 "w_mbytes_per_sec": 0 00:13:13.667 }, 00:13:13.667 "claimed": true, 00:13:13.667 "claim_type": "exclusive_write", 00:13:13.667 "zoned": false, 00:13:13.667 "supported_io_types": { 00:13:13.667 "read": true, 00:13:13.667 "write": true, 00:13:13.667 "unmap": true, 00:13:13.667 "flush": true, 00:13:13.667 "reset": true, 00:13:13.667 "nvme_admin": false, 00:13:13.667 "nvme_io": false, 00:13:13.667 "nvme_io_md": false, 00:13:13.667 "write_zeroes": true, 00:13:13.667 "zcopy": true, 00:13:13.667 "get_zone_info": false, 00:13:13.667 "zone_management": false, 00:13:13.667 "zone_append": false, 00:13:13.667 "compare": false, 00:13:13.667 "compare_and_write": false, 00:13:13.667 "abort": true, 00:13:13.667 "seek_hole": false, 00:13:13.667 "seek_data": false, 00:13:13.667 "copy": true, 00:13:13.667 "nvme_iov_md": false 00:13:13.667 }, 00:13:13.667 "memory_domains": [ 00:13:13.667 { 00:13:13.667 "dma_device_id": "system", 00:13:13.667 "dma_device_type": 1 00:13:13.667 }, 00:13:13.667 { 00:13:13.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.667 "dma_device_type": 2 00:13:13.667 } 00:13:13.667 ], 00:13:13.667 "driver_specific": {} 00:13:13.667 } 00:13:13.667 ] 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.667 "name": "Existed_Raid", 00:13:13.667 "uuid": "8831a244-627e-4134-9b10-2da6640f382d", 00:13:13.667 "strip_size_kb": 64, 00:13:13.667 "state": "configuring", 00:13:13.667 "raid_level": "raid0", 00:13:13.667 "superblock": true, 00:13:13.667 "num_base_bdevs": 3, 00:13:13.667 "num_base_bdevs_discovered": 2, 00:13:13.667 "num_base_bdevs_operational": 3, 00:13:13.667 "base_bdevs_list": [ 00:13:13.667 { 00:13:13.667 "name": "BaseBdev1", 00:13:13.667 "uuid": "13895976-69c4-42ce-8c12-2b7ba5b0a149", 00:13:13.667 "is_configured": true, 00:13:13.667 "data_offset": 2048, 00:13:13.667 "data_size": 63488 00:13:13.667 }, 00:13:13.667 { 00:13:13.667 "name": "BaseBdev2", 00:13:13.667 "uuid": "c571dd9d-6544-4f1b-af99-ac3d29f692bb", 00:13:13.667 "is_configured": true, 00:13:13.667 "data_offset": 2048, 00:13:13.667 "data_size": 63488 00:13:13.667 }, 00:13:13.667 { 00:13:13.667 "name": "BaseBdev3", 00:13:13.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.667 "is_configured": false, 00:13:13.667 "data_offset": 0, 00:13:13.667 "data_size": 0 00:13:13.667 } 00:13:13.667 ] 00:13:13.667 }' 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.667 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.235 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:14.235 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.235 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.235 [2024-12-06 06:39:32.639123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.235 [2024-12-06 06:39:32.639453] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:14.235 [2024-12-06 06:39:32.639482] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:14.235 BaseBdev3 00:13:14.235 [2024-12-06 06:39:32.639872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:14.235 [2024-12-06 06:39:32.640075] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:14.235 [2024-12-06 06:39:32.640092] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:14.235 [2024-12-06 06:39:32.640270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.235 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.235 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:14.235 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:14.235 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.235 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:14.235 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.236 [ 00:13:14.236 { 00:13:14.236 "name": "BaseBdev3", 00:13:14.236 "aliases": [ 00:13:14.236 "2ec04150-2637-48dc-902a-c046e0f4da2d" 00:13:14.236 ], 00:13:14.236 "product_name": "Malloc disk", 00:13:14.236 "block_size": 512, 00:13:14.236 "num_blocks": 65536, 00:13:14.236 "uuid": "2ec04150-2637-48dc-902a-c046e0f4da2d", 00:13:14.236 "assigned_rate_limits": { 00:13:14.236 "rw_ios_per_sec": 0, 00:13:14.236 "rw_mbytes_per_sec": 0, 00:13:14.236 "r_mbytes_per_sec": 0, 00:13:14.236 "w_mbytes_per_sec": 0 00:13:14.236 }, 00:13:14.236 "claimed": true, 00:13:14.236 "claim_type": "exclusive_write", 00:13:14.236 "zoned": false, 00:13:14.236 "supported_io_types": { 00:13:14.236 "read": true, 00:13:14.236 "write": true, 00:13:14.236 "unmap": true, 00:13:14.236 "flush": true, 00:13:14.236 "reset": true, 00:13:14.236 "nvme_admin": false, 00:13:14.236 "nvme_io": false, 00:13:14.236 "nvme_io_md": false, 00:13:14.236 "write_zeroes": true, 00:13:14.236 "zcopy": true, 00:13:14.236 "get_zone_info": false, 00:13:14.236 "zone_management": false, 00:13:14.236 "zone_append": false, 00:13:14.236 "compare": false, 00:13:14.236 "compare_and_write": false, 00:13:14.236 "abort": true, 00:13:14.236 "seek_hole": false, 00:13:14.236 "seek_data": false, 00:13:14.236 "copy": true, 00:13:14.236 "nvme_iov_md": false 00:13:14.236 }, 00:13:14.236 "memory_domains": [ 00:13:14.236 { 00:13:14.236 "dma_device_id": "system", 00:13:14.236 "dma_device_type": 1 00:13:14.236 }, 00:13:14.236 { 00:13:14.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.236 "dma_device_type": 2 00:13:14.236 } 00:13:14.236 ], 00:13:14.236 "driver_specific": 
{} 00:13:14.236 } 00:13:14.236 ] 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.236 "name": "Existed_Raid", 00:13:14.236 "uuid": "8831a244-627e-4134-9b10-2da6640f382d", 00:13:14.236 "strip_size_kb": 64, 00:13:14.236 "state": "online", 00:13:14.236 "raid_level": "raid0", 00:13:14.236 "superblock": true, 00:13:14.236 "num_base_bdevs": 3, 00:13:14.236 "num_base_bdevs_discovered": 3, 00:13:14.236 "num_base_bdevs_operational": 3, 00:13:14.236 "base_bdevs_list": [ 00:13:14.236 { 00:13:14.236 "name": "BaseBdev1", 00:13:14.236 "uuid": "13895976-69c4-42ce-8c12-2b7ba5b0a149", 00:13:14.236 "is_configured": true, 00:13:14.236 "data_offset": 2048, 00:13:14.236 "data_size": 63488 00:13:14.236 }, 00:13:14.236 { 00:13:14.236 "name": "BaseBdev2", 00:13:14.236 "uuid": "c571dd9d-6544-4f1b-af99-ac3d29f692bb", 00:13:14.236 "is_configured": true, 00:13:14.236 "data_offset": 2048, 00:13:14.236 "data_size": 63488 00:13:14.236 }, 00:13:14.236 { 00:13:14.236 "name": "BaseBdev3", 00:13:14.236 "uuid": "2ec04150-2637-48dc-902a-c046e0f4da2d", 00:13:14.236 "is_configured": true, 00:13:14.236 "data_offset": 2048, 00:13:14.236 "data_size": 63488 00:13:14.236 } 00:13:14.236 ] 00:13:14.236 }' 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.236 06:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.804 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:14.804 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:14.804 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
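The `verify_raid_bdev_state` calls above fetch `bdev_raid_get_bdevs all` and pick out the `Existed_Raid` record with `jq -r '.[] | select(.name == "Existed_Raid")'`. A minimal standalone sketch of that selection follows; the JSON here is a hand-trimmed sample of the record shown in the log, not live RPC output:

```shell
# Sketch of the jq selection used by verify_raid_bdev_state.
# 'raid_bdevs' is a trimmed sample of 'rpc_cmd bdev_raid_get_bdevs all' output.
raid_bdevs='[{"name":"Existed_Raid","state":"online","raid_level":"raid0","num_base_bdevs_discovered":3}]'

# Select the record for the raid bdev under test, then read the fields the
# helper compares against its expectations.
raid_bdev_info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<< "$raid_bdevs")
state=$(jq -r '.state' <<< "$raid_bdev_info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info")
echo "$state $discovered"
```

With the sample above this prints `online 3`, matching the state the log verifies at this point in the test.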
local raid_bdev_info 00:13:14.804 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:14.804 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:14.804 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:14.804 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:14.804 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:14.804 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.804 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.804 [2024-12-06 06:39:33.211705] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.804 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.804 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:14.804 "name": "Existed_Raid", 00:13:14.804 "aliases": [ 00:13:14.804 "8831a244-627e-4134-9b10-2da6640f382d" 00:13:14.804 ], 00:13:14.805 "product_name": "Raid Volume", 00:13:14.805 "block_size": 512, 00:13:14.805 "num_blocks": 190464, 00:13:14.805 "uuid": "8831a244-627e-4134-9b10-2da6640f382d", 00:13:14.805 "assigned_rate_limits": { 00:13:14.805 "rw_ios_per_sec": 0, 00:13:14.805 "rw_mbytes_per_sec": 0, 00:13:14.805 "r_mbytes_per_sec": 0, 00:13:14.805 "w_mbytes_per_sec": 0 00:13:14.805 }, 00:13:14.805 "claimed": false, 00:13:14.805 "zoned": false, 00:13:14.805 "supported_io_types": { 00:13:14.805 "read": true, 00:13:14.805 "write": true, 00:13:14.805 "unmap": true, 00:13:14.805 "flush": true, 00:13:14.805 "reset": true, 00:13:14.805 "nvme_admin": false, 00:13:14.805 "nvme_io": false, 00:13:14.805 "nvme_io_md": false, 00:13:14.805 
"write_zeroes": true, 00:13:14.805 "zcopy": false, 00:13:14.805 "get_zone_info": false, 00:13:14.805 "zone_management": false, 00:13:14.805 "zone_append": false, 00:13:14.805 "compare": false, 00:13:14.805 "compare_and_write": false, 00:13:14.805 "abort": false, 00:13:14.805 "seek_hole": false, 00:13:14.805 "seek_data": false, 00:13:14.805 "copy": false, 00:13:14.805 "nvme_iov_md": false 00:13:14.805 }, 00:13:14.805 "memory_domains": [ 00:13:14.805 { 00:13:14.805 "dma_device_id": "system", 00:13:14.805 "dma_device_type": 1 00:13:14.805 }, 00:13:14.805 { 00:13:14.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.805 "dma_device_type": 2 00:13:14.805 }, 00:13:14.805 { 00:13:14.805 "dma_device_id": "system", 00:13:14.805 "dma_device_type": 1 00:13:14.805 }, 00:13:14.805 { 00:13:14.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.805 "dma_device_type": 2 00:13:14.805 }, 00:13:14.805 { 00:13:14.805 "dma_device_id": "system", 00:13:14.805 "dma_device_type": 1 00:13:14.805 }, 00:13:14.805 { 00:13:14.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.805 "dma_device_type": 2 00:13:14.805 } 00:13:14.805 ], 00:13:14.805 "driver_specific": { 00:13:14.805 "raid": { 00:13:14.805 "uuid": "8831a244-627e-4134-9b10-2da6640f382d", 00:13:14.805 "strip_size_kb": 64, 00:13:14.805 "state": "online", 00:13:14.805 "raid_level": "raid0", 00:13:14.805 "superblock": true, 00:13:14.805 "num_base_bdevs": 3, 00:13:14.805 "num_base_bdevs_discovered": 3, 00:13:14.805 "num_base_bdevs_operational": 3, 00:13:14.805 "base_bdevs_list": [ 00:13:14.805 { 00:13:14.805 "name": "BaseBdev1", 00:13:14.805 "uuid": "13895976-69c4-42ce-8c12-2b7ba5b0a149", 00:13:14.805 "is_configured": true, 00:13:14.805 "data_offset": 2048, 00:13:14.805 "data_size": 63488 00:13:14.805 }, 00:13:14.805 { 00:13:14.805 "name": "BaseBdev2", 00:13:14.805 "uuid": "c571dd9d-6544-4f1b-af99-ac3d29f692bb", 00:13:14.805 "is_configured": true, 00:13:14.805 "data_offset": 2048, 00:13:14.805 "data_size": 63488 00:13:14.805 }, 
00:13:14.805 { 00:13:14.805 "name": "BaseBdev3", 00:13:14.805 "uuid": "2ec04150-2637-48dc-902a-c046e0f4da2d", 00:13:14.805 "is_configured": true, 00:13:14.805 "data_offset": 2048, 00:13:14.805 "data_size": 63488 00:13:14.805 } 00:13:14.805 ] 00:13:14.805 } 00:13:14.805 } 00:13:14.805 }' 00:13:14.805 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:14.805 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:14.805 BaseBdev2 00:13:14.805 BaseBdev3' 00:13:14.805 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.805 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:14.805 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.805 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:14.805 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.805 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.805 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.805 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.805 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.805 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.805 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.805 
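The `verify_raid_bdev_properties` loop above builds a comparison string from `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` for the raid bdev and for each base bdev, then compares them (jq renders the `null` metadata fields as empty strings, which is why the log shows `cmp_base_bdev='512 '` with trailing blanks). A sketch of that comparison on sample JSON:

```shell
# Sketch of the property comparison from verify_raid_bdev_properties.
# Both JSON snippets are hand-made samples mirroring the log's bdevs.
raid_json='{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}'
base_json='{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}'

filter='[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# jq's join() turns numbers into strings and nulls into empty strings,
# so a 512-byte bdev with no metadata yields "512" plus trailing spaces.
cmp_raid_bdev=$(jq -r "$filter" <<< "$raid_json")
cmp_base_bdev=$(jq -r "$filter" <<< "$base_json")

[ "$cmp_raid_bdev" = "$cmp_base_bdev" ] && echo match
```

This is why the test's `[[ 512 == \5\1\2\ \ \ ]]` patterns escape the trailing spaces: the join output keeps a separator for each null field.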
06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.805 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:14.805 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.805 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.805 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.064 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:15.064 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:15.064 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:15.064 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:15.064 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.064 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.064 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:15.064 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.064 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:15.064 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:15.064 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.065 [2024-12-06 06:39:33.527465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:15.065 [2024-12-06 06:39:33.527498] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.065 [2024-12-06 06:39:33.527621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.065 "name": "Existed_Raid", 00:13:15.065 "uuid": "8831a244-627e-4134-9b10-2da6640f382d", 00:13:15.065 "strip_size_kb": 64, 00:13:15.065 "state": "offline", 00:13:15.065 "raid_level": "raid0", 00:13:15.065 "superblock": true, 00:13:15.065 "num_base_bdevs": 3, 00:13:15.065 "num_base_bdevs_discovered": 2, 00:13:15.065 "num_base_bdevs_operational": 2, 00:13:15.065 "base_bdevs_list": [ 00:13:15.065 { 00:13:15.065 "name": null, 00:13:15.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.065 "is_configured": false, 00:13:15.065 "data_offset": 0, 00:13:15.065 "data_size": 63488 00:13:15.065 }, 00:13:15.065 { 00:13:15.065 "name": "BaseBdev2", 00:13:15.065 "uuid": "c571dd9d-6544-4f1b-af99-ac3d29f692bb", 00:13:15.065 "is_configured": true, 00:13:15.065 "data_offset": 2048, 00:13:15.065 "data_size": 63488 00:13:15.065 }, 00:13:15.065 { 00:13:15.065 "name": "BaseBdev3", 00:13:15.065 "uuid": "2ec04150-2637-48dc-902a-c046e0f4da2d", 
00:13:15.065 "is_configured": true, 00:13:15.065 "data_offset": 2048, 00:13:15.065 "data_size": 63488 00:13:15.065 } 00:13:15.065 ] 00:13:15.065 }' 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.065 06:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.659 [2024-12-06 06:39:34.196424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb 
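The state transition verified above hinges on the `has_redundancy raid0` call returning 1: raid0 stripes without parity or mirroring, so deleting `BaseBdev1` drives `Existed_Raid` from `online` to `offline` rather than degraded. A sketch of that decision; the exact set of levels the real `has_redundancy` treats as redundant is an assumption here:

```shell
# Sketch of the has_redundancy branch from bdev_raid.sh.
# Assumption: raid1 and raid5f are the levels treated as redundant; the
# actual case arms in the test suite may differ.
has_redundancy() {
  case $1 in
    raid1 | raid5f) return 0 ;;  # redundant: survives a base bdev loss
    *) return 1 ;;               # raid0, concat, etc.: no redundancy
  esac
}

# Mirror the log: removing a base bdev from a raid0 array must take it offline.
if has_redundancy raid0; then
  expected_state=online
else
  expected_state=offline
fi
echo "$expected_state"
```

The subsequent `verify_raid_bdev_state Existed_Raid offline raid0 64 2` in the log checks exactly this expectation against the live RPC output.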
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:15.659 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.919 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:15.919 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:15.919 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:15.919 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.919 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.919 [2024-12-06 06:39:34.339895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:15.919 [2024-12-06 06:39:34.340004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:15.919 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.919 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:15.919 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.919 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:15.919 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.919 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.920 BaseBdev2 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:15.920 06:39:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.920 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.920 [ 00:13:15.920 { 00:13:15.920 "name": "BaseBdev2", 00:13:15.920 "aliases": [ 00:13:15.920 "7f02d366-500a-4f8f-8c0a-de4bdbb909ab" 00:13:15.920 ], 00:13:15.920 "product_name": "Malloc disk", 00:13:15.920 "block_size": 512, 00:13:15.920 "num_blocks": 65536, 00:13:15.920 "uuid": "7f02d366-500a-4f8f-8c0a-de4bdbb909ab", 00:13:15.920 "assigned_rate_limits": { 00:13:15.920 "rw_ios_per_sec": 0, 00:13:15.920 "rw_mbytes_per_sec": 0, 00:13:15.920 "r_mbytes_per_sec": 0, 00:13:15.920 "w_mbytes_per_sec": 0 00:13:15.920 }, 00:13:15.920 "claimed": false, 00:13:15.920 "zoned": false, 00:13:15.920 "supported_io_types": { 00:13:15.920 "read": true, 00:13:15.920 "write": true, 00:13:15.920 "unmap": true, 00:13:15.920 "flush": true, 00:13:15.920 "reset": true, 00:13:15.920 "nvme_admin": false, 00:13:15.920 "nvme_io": false, 00:13:15.920 "nvme_io_md": false, 00:13:15.920 "write_zeroes": true, 00:13:15.920 "zcopy": true, 00:13:15.920 "get_zone_info": false, 00:13:15.920 
"zone_management": false, 00:13:15.920 "zone_append": false, 00:13:15.920 "compare": false, 00:13:15.920 "compare_and_write": false, 00:13:15.920 "abort": true, 00:13:15.920 "seek_hole": false, 00:13:16.179 "seek_data": false, 00:13:16.179 "copy": true, 00:13:16.179 "nvme_iov_md": false 00:13:16.179 }, 00:13:16.179 "memory_domains": [ 00:13:16.179 { 00:13:16.179 "dma_device_id": "system", 00:13:16.179 "dma_device_type": 1 00:13:16.179 }, 00:13:16.179 { 00:13:16.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.179 "dma_device_type": 2 00:13:16.179 } 00:13:16.179 ], 00:13:16.179 "driver_specific": {} 00:13:16.179 } 00:13:16.179 ] 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.179 BaseBdev3 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.179 [ 00:13:16.179 { 00:13:16.179 "name": "BaseBdev3", 00:13:16.179 "aliases": [ 00:13:16.179 "0803d2d4-1953-4a25-a566-deaed5609e1d" 00:13:16.179 ], 00:13:16.179 "product_name": "Malloc disk", 00:13:16.179 "block_size": 512, 00:13:16.179 "num_blocks": 65536, 00:13:16.179 "uuid": "0803d2d4-1953-4a25-a566-deaed5609e1d", 00:13:16.179 "assigned_rate_limits": { 00:13:16.179 "rw_ios_per_sec": 0, 00:13:16.179 "rw_mbytes_per_sec": 0, 00:13:16.179 "r_mbytes_per_sec": 0, 00:13:16.179 "w_mbytes_per_sec": 0 00:13:16.179 }, 00:13:16.179 "claimed": false, 00:13:16.179 "zoned": false, 00:13:16.179 "supported_io_types": { 00:13:16.179 "read": true, 00:13:16.179 "write": true, 00:13:16.179 "unmap": true, 00:13:16.179 "flush": true, 00:13:16.179 "reset": true, 00:13:16.179 "nvme_admin": false, 00:13:16.179 "nvme_io": false, 00:13:16.179 "nvme_io_md": false, 00:13:16.179 "write_zeroes": true, 00:13:16.179 
"zcopy": true, 00:13:16.179 "get_zone_info": false, 00:13:16.179 "zone_management": false, 00:13:16.179 "zone_append": false, 00:13:16.179 "compare": false, 00:13:16.179 "compare_and_write": false, 00:13:16.179 "abort": true, 00:13:16.179 "seek_hole": false, 00:13:16.179 "seek_data": false, 00:13:16.179 "copy": true, 00:13:16.179 "nvme_iov_md": false 00:13:16.179 }, 00:13:16.179 "memory_domains": [ 00:13:16.179 { 00:13:16.179 "dma_device_id": "system", 00:13:16.179 "dma_device_type": 1 00:13:16.179 }, 00:13:16.179 { 00:13:16.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.179 "dma_device_type": 2 00:13:16.179 } 00:13:16.179 ], 00:13:16.179 "driver_specific": {} 00:13:16.179 } 00:13:16.179 ] 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:16.179 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.180 [2024-12-06 06:39:34.650282] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:16.180 [2024-12-06 06:39:34.650468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:16.180 [2024-12-06 06:39:34.650520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.180 [2024-12-06 06:39:34.652988] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.180 06:39:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.180 "name": "Existed_Raid", 00:13:16.180 "uuid": "33484285-530f-4a4b-9bc8-c63f59058bbb", 00:13:16.180 "strip_size_kb": 64, 00:13:16.180 "state": "configuring", 00:13:16.180 "raid_level": "raid0", 00:13:16.180 "superblock": true, 00:13:16.180 "num_base_bdevs": 3, 00:13:16.180 "num_base_bdevs_discovered": 2, 00:13:16.180 "num_base_bdevs_operational": 3, 00:13:16.180 "base_bdevs_list": [ 00:13:16.180 { 00:13:16.180 "name": "BaseBdev1", 00:13:16.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.180 "is_configured": false, 00:13:16.180 "data_offset": 0, 00:13:16.180 "data_size": 0 00:13:16.180 }, 00:13:16.180 { 00:13:16.180 "name": "BaseBdev2", 00:13:16.180 "uuid": "7f02d366-500a-4f8f-8c0a-de4bdbb909ab", 00:13:16.180 "is_configured": true, 00:13:16.180 "data_offset": 2048, 00:13:16.180 "data_size": 63488 00:13:16.180 }, 00:13:16.180 { 00:13:16.180 "name": "BaseBdev3", 00:13:16.180 "uuid": "0803d2d4-1953-4a25-a566-deaed5609e1d", 00:13:16.180 "is_configured": true, 00:13:16.180 "data_offset": 2048, 00:13:16.180 "data_size": 63488 00:13:16.180 } 00:13:16.180 ] 00:13:16.180 }' 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.180 06:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.746 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.747 [2024-12-06 06:39:35.158422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.747 06:39:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.747 "name": "Existed_Raid", 00:13:16.747 "uuid": "33484285-530f-4a4b-9bc8-c63f59058bbb", 00:13:16.747 "strip_size_kb": 64, 
00:13:16.747 "state": "configuring", 00:13:16.747 "raid_level": "raid0", 00:13:16.747 "superblock": true, 00:13:16.747 "num_base_bdevs": 3, 00:13:16.747 "num_base_bdevs_discovered": 1, 00:13:16.747 "num_base_bdevs_operational": 3, 00:13:16.747 "base_bdevs_list": [ 00:13:16.747 { 00:13:16.747 "name": "BaseBdev1", 00:13:16.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.747 "is_configured": false, 00:13:16.747 "data_offset": 0, 00:13:16.747 "data_size": 0 00:13:16.747 }, 00:13:16.747 { 00:13:16.747 "name": null, 00:13:16.747 "uuid": "7f02d366-500a-4f8f-8c0a-de4bdbb909ab", 00:13:16.747 "is_configured": false, 00:13:16.747 "data_offset": 0, 00:13:16.747 "data_size": 63488 00:13:16.747 }, 00:13:16.747 { 00:13:16.747 "name": "BaseBdev3", 00:13:16.747 "uuid": "0803d2d4-1953-4a25-a566-deaed5609e1d", 00:13:16.747 "is_configured": true, 00:13:16.747 "data_offset": 2048, 00:13:16.747 "data_size": 63488 00:13:16.747 } 00:13:16.747 ] 00:13:16.747 }' 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.747 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.313 [2024-12-06 06:39:35.741509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.313 BaseBdev1 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.313 
[ 00:13:17.313 { 00:13:17.313 "name": "BaseBdev1", 00:13:17.313 "aliases": [ 00:13:17.313 "8c651837-f6d5-49f6-ac77-223400f94dce" 00:13:17.313 ], 00:13:17.313 "product_name": "Malloc disk", 00:13:17.313 "block_size": 512, 00:13:17.313 "num_blocks": 65536, 00:13:17.313 "uuid": "8c651837-f6d5-49f6-ac77-223400f94dce", 00:13:17.313 "assigned_rate_limits": { 00:13:17.313 "rw_ios_per_sec": 0, 00:13:17.313 "rw_mbytes_per_sec": 0, 00:13:17.313 "r_mbytes_per_sec": 0, 00:13:17.313 "w_mbytes_per_sec": 0 00:13:17.313 }, 00:13:17.313 "claimed": true, 00:13:17.313 "claim_type": "exclusive_write", 00:13:17.313 "zoned": false, 00:13:17.313 "supported_io_types": { 00:13:17.313 "read": true, 00:13:17.313 "write": true, 00:13:17.313 "unmap": true, 00:13:17.313 "flush": true, 00:13:17.313 "reset": true, 00:13:17.313 "nvme_admin": false, 00:13:17.313 "nvme_io": false, 00:13:17.313 "nvme_io_md": false, 00:13:17.313 "write_zeroes": true, 00:13:17.313 "zcopy": true, 00:13:17.313 "get_zone_info": false, 00:13:17.313 "zone_management": false, 00:13:17.313 "zone_append": false, 00:13:17.313 "compare": false, 00:13:17.313 "compare_and_write": false, 00:13:17.313 "abort": true, 00:13:17.313 "seek_hole": false, 00:13:17.313 "seek_data": false, 00:13:17.313 "copy": true, 00:13:17.313 "nvme_iov_md": false 00:13:17.313 }, 00:13:17.313 "memory_domains": [ 00:13:17.313 { 00:13:17.313 "dma_device_id": "system", 00:13:17.313 "dma_device_type": 1 00:13:17.313 }, 00:13:17.313 { 00:13:17.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.313 "dma_device_type": 2 00:13:17.313 } 00:13:17.313 ], 00:13:17.313 "driver_specific": {} 00:13:17.313 } 00:13:17.313 ] 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.313 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.314 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.314 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.314 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.314 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.314 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.314 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.314 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.314 "name": "Existed_Raid", 00:13:17.314 "uuid": "33484285-530f-4a4b-9bc8-c63f59058bbb", 00:13:17.314 "strip_size_kb": 64, 00:13:17.314 "state": "configuring", 00:13:17.314 "raid_level": "raid0", 00:13:17.314 "superblock": true, 
00:13:17.314 "num_base_bdevs": 3, 00:13:17.314 "num_base_bdevs_discovered": 2, 00:13:17.314 "num_base_bdevs_operational": 3, 00:13:17.314 "base_bdevs_list": [ 00:13:17.314 { 00:13:17.314 "name": "BaseBdev1", 00:13:17.314 "uuid": "8c651837-f6d5-49f6-ac77-223400f94dce", 00:13:17.314 "is_configured": true, 00:13:17.314 "data_offset": 2048, 00:13:17.314 "data_size": 63488 00:13:17.314 }, 00:13:17.314 { 00:13:17.314 "name": null, 00:13:17.314 "uuid": "7f02d366-500a-4f8f-8c0a-de4bdbb909ab", 00:13:17.314 "is_configured": false, 00:13:17.314 "data_offset": 0, 00:13:17.314 "data_size": 63488 00:13:17.314 }, 00:13:17.314 { 00:13:17.314 "name": "BaseBdev3", 00:13:17.314 "uuid": "0803d2d4-1953-4a25-a566-deaed5609e1d", 00:13:17.314 "is_configured": true, 00:13:17.314 "data_offset": 2048, 00:13:17.314 "data_size": 63488 00:13:17.314 } 00:13:17.314 ] 00:13:17.314 }' 00:13:17.314 06:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.314 06:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.880 [2024-12-06 06:39:36.305733] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.880 "name": "Existed_Raid", 00:13:17.880 "uuid": "33484285-530f-4a4b-9bc8-c63f59058bbb", 00:13:17.880 "strip_size_kb": 64, 00:13:17.880 "state": "configuring", 00:13:17.880 "raid_level": "raid0", 00:13:17.880 "superblock": true, 00:13:17.880 "num_base_bdevs": 3, 00:13:17.880 "num_base_bdevs_discovered": 1, 00:13:17.880 "num_base_bdevs_operational": 3, 00:13:17.880 "base_bdevs_list": [ 00:13:17.880 { 00:13:17.880 "name": "BaseBdev1", 00:13:17.880 "uuid": "8c651837-f6d5-49f6-ac77-223400f94dce", 00:13:17.880 "is_configured": true, 00:13:17.880 "data_offset": 2048, 00:13:17.880 "data_size": 63488 00:13:17.880 }, 00:13:17.880 { 00:13:17.880 "name": null, 00:13:17.880 "uuid": "7f02d366-500a-4f8f-8c0a-de4bdbb909ab", 00:13:17.880 "is_configured": false, 00:13:17.880 "data_offset": 0, 00:13:17.880 "data_size": 63488 00:13:17.880 }, 00:13:17.880 { 00:13:17.880 "name": null, 00:13:17.880 "uuid": "0803d2d4-1953-4a25-a566-deaed5609e1d", 00:13:17.880 "is_configured": false, 00:13:17.880 "data_offset": 0, 00:13:17.880 "data_size": 63488 00:13:17.880 } 00:13:17.880 ] 00:13:17.880 }' 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.880 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.447 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:18.447 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.447 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.447 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:18.447 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.447 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:18.447 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:18.447 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.448 [2024-12-06 06:39:36.869959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.448 "name": "Existed_Raid", 00:13:18.448 "uuid": "33484285-530f-4a4b-9bc8-c63f59058bbb", 00:13:18.448 "strip_size_kb": 64, 00:13:18.448 "state": "configuring", 00:13:18.448 "raid_level": "raid0", 00:13:18.448 "superblock": true, 00:13:18.448 "num_base_bdevs": 3, 00:13:18.448 "num_base_bdevs_discovered": 2, 00:13:18.448 "num_base_bdevs_operational": 3, 00:13:18.448 "base_bdevs_list": [ 00:13:18.448 { 00:13:18.448 "name": "BaseBdev1", 00:13:18.448 "uuid": "8c651837-f6d5-49f6-ac77-223400f94dce", 00:13:18.448 "is_configured": true, 00:13:18.448 "data_offset": 2048, 00:13:18.448 "data_size": 63488 00:13:18.448 }, 00:13:18.448 { 00:13:18.448 "name": null, 00:13:18.448 "uuid": "7f02d366-500a-4f8f-8c0a-de4bdbb909ab", 00:13:18.448 "is_configured": false, 00:13:18.448 "data_offset": 0, 00:13:18.448 "data_size": 63488 00:13:18.448 }, 00:13:18.448 { 00:13:18.448 "name": "BaseBdev3", 00:13:18.448 "uuid": "0803d2d4-1953-4a25-a566-deaed5609e1d", 00:13:18.448 "is_configured": true, 00:13:18.448 "data_offset": 2048, 00:13:18.448 "data_size": 63488 00:13:18.448 } 00:13:18.448 ] 00:13:18.448 }' 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.448 06:39:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.015 [2024-12-06 06:39:37.454133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.015 "name": "Existed_Raid", 00:13:19.015 "uuid": "33484285-530f-4a4b-9bc8-c63f59058bbb", 00:13:19.015 "strip_size_kb": 64, 00:13:19.015 "state": "configuring", 00:13:19.015 "raid_level": "raid0", 00:13:19.015 "superblock": true, 00:13:19.015 "num_base_bdevs": 3, 00:13:19.015 "num_base_bdevs_discovered": 1, 00:13:19.015 "num_base_bdevs_operational": 3, 00:13:19.015 "base_bdevs_list": [ 00:13:19.015 { 00:13:19.015 "name": null, 00:13:19.015 "uuid": "8c651837-f6d5-49f6-ac77-223400f94dce", 00:13:19.015 "is_configured": false, 00:13:19.015 "data_offset": 0, 00:13:19.015 "data_size": 63488 00:13:19.015 }, 00:13:19.015 { 00:13:19.015 "name": null, 00:13:19.015 "uuid": "7f02d366-500a-4f8f-8c0a-de4bdbb909ab", 00:13:19.015 "is_configured": false, 00:13:19.015 "data_offset": 0, 00:13:19.015 
"data_size": 63488 00:13:19.015 }, 00:13:19.015 { 00:13:19.015 "name": "BaseBdev3", 00:13:19.015 "uuid": "0803d2d4-1953-4a25-a566-deaed5609e1d", 00:13:19.015 "is_configured": true, 00:13:19.015 "data_offset": 2048, 00:13:19.015 "data_size": 63488 00:13:19.015 } 00:13:19.015 ] 00:13:19.015 }' 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.015 06:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.584 [2024-12-06 06:39:38.131209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:19.584 06:39:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.584 "name": "Existed_Raid", 00:13:19.584 "uuid": "33484285-530f-4a4b-9bc8-c63f59058bbb", 00:13:19.584 "strip_size_kb": 64, 00:13:19.584 "state": "configuring", 00:13:19.584 "raid_level": "raid0", 00:13:19.584 "superblock": true, 00:13:19.584 "num_base_bdevs": 3, 00:13:19.584 
"num_base_bdevs_discovered": 2, 00:13:19.584 "num_base_bdevs_operational": 3, 00:13:19.584 "base_bdevs_list": [ 00:13:19.584 { 00:13:19.584 "name": null, 00:13:19.584 "uuid": "8c651837-f6d5-49f6-ac77-223400f94dce", 00:13:19.584 "is_configured": false, 00:13:19.584 "data_offset": 0, 00:13:19.584 "data_size": 63488 00:13:19.584 }, 00:13:19.584 { 00:13:19.584 "name": "BaseBdev2", 00:13:19.584 "uuid": "7f02d366-500a-4f8f-8c0a-de4bdbb909ab", 00:13:19.584 "is_configured": true, 00:13:19.584 "data_offset": 2048, 00:13:19.584 "data_size": 63488 00:13:19.584 }, 00:13:19.584 { 00:13:19.584 "name": "BaseBdev3", 00:13:19.584 "uuid": "0803d2d4-1953-4a25-a566-deaed5609e1d", 00:13:19.584 "is_configured": true, 00:13:19.584 "data_offset": 2048, 00:13:19.584 "data_size": 63488 00:13:19.584 } 00:13:19.584 ] 00:13:19.584 }' 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.584 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.152 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.152 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.152 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:20.152 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.152 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.152 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:20.152 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.152 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:20.152 06:39:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.152 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.152 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.152 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8c651837-f6d5-49f6-ac77-223400f94dce 00:13:20.152 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.152 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.412 [2024-12-06 06:39:38.799070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:20.412 [2024-12-06 06:39:38.799330] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:20.412 [2024-12-06 06:39:38.799354] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:20.412 [2024-12-06 06:39:38.799674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:20.412 [2024-12-06 06:39:38.799858] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:20.412 [2024-12-06 06:39:38.799874] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:20.412 NewBaseBdev 00:13:20.412 [2024-12-06 06:39:38.800038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:20.412 
06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.412 [ 00:13:20.412 { 00:13:20.412 "name": "NewBaseBdev", 00:13:20.412 "aliases": [ 00:13:20.412 "8c651837-f6d5-49f6-ac77-223400f94dce" 00:13:20.412 ], 00:13:20.412 "product_name": "Malloc disk", 00:13:20.412 "block_size": 512, 00:13:20.412 "num_blocks": 65536, 00:13:20.412 "uuid": "8c651837-f6d5-49f6-ac77-223400f94dce", 00:13:20.412 "assigned_rate_limits": { 00:13:20.412 "rw_ios_per_sec": 0, 00:13:20.412 "rw_mbytes_per_sec": 0, 00:13:20.412 "r_mbytes_per_sec": 0, 00:13:20.412 "w_mbytes_per_sec": 0 00:13:20.412 }, 00:13:20.412 "claimed": true, 00:13:20.412 "claim_type": "exclusive_write", 00:13:20.412 "zoned": false, 00:13:20.412 "supported_io_types": { 00:13:20.412 "read": true, 00:13:20.412 "write": true, 00:13:20.412 
"unmap": true, 00:13:20.412 "flush": true, 00:13:20.412 "reset": true, 00:13:20.412 "nvme_admin": false, 00:13:20.412 "nvme_io": false, 00:13:20.412 "nvme_io_md": false, 00:13:20.412 "write_zeroes": true, 00:13:20.412 "zcopy": true, 00:13:20.412 "get_zone_info": false, 00:13:20.412 "zone_management": false, 00:13:20.412 "zone_append": false, 00:13:20.412 "compare": false, 00:13:20.412 "compare_and_write": false, 00:13:20.412 "abort": true, 00:13:20.412 "seek_hole": false, 00:13:20.412 "seek_data": false, 00:13:20.412 "copy": true, 00:13:20.412 "nvme_iov_md": false 00:13:20.412 }, 00:13:20.412 "memory_domains": [ 00:13:20.412 { 00:13:20.412 "dma_device_id": "system", 00:13:20.412 "dma_device_type": 1 00:13:20.412 }, 00:13:20.412 { 00:13:20.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.412 "dma_device_type": 2 00:13:20.412 } 00:13:20.412 ], 00:13:20.412 "driver_specific": {} 00:13:20.412 } 00:13:20.412 ] 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.412 "name": "Existed_Raid", 00:13:20.412 "uuid": "33484285-530f-4a4b-9bc8-c63f59058bbb", 00:13:20.412 "strip_size_kb": 64, 00:13:20.412 "state": "online", 00:13:20.412 "raid_level": "raid0", 00:13:20.412 "superblock": true, 00:13:20.412 "num_base_bdevs": 3, 00:13:20.412 "num_base_bdevs_discovered": 3, 00:13:20.412 "num_base_bdevs_operational": 3, 00:13:20.412 "base_bdevs_list": [ 00:13:20.412 { 00:13:20.412 "name": "NewBaseBdev", 00:13:20.412 "uuid": "8c651837-f6d5-49f6-ac77-223400f94dce", 00:13:20.412 "is_configured": true, 00:13:20.412 "data_offset": 2048, 00:13:20.412 "data_size": 63488 00:13:20.412 }, 00:13:20.412 { 00:13:20.412 "name": "BaseBdev2", 00:13:20.412 "uuid": "7f02d366-500a-4f8f-8c0a-de4bdbb909ab", 00:13:20.412 "is_configured": true, 00:13:20.412 "data_offset": 2048, 00:13:20.412 "data_size": 63488 00:13:20.412 }, 00:13:20.412 { 00:13:20.412 "name": "BaseBdev3", 00:13:20.412 "uuid": "0803d2d4-1953-4a25-a566-deaed5609e1d", 00:13:20.412 
"is_configured": true, 00:13:20.412 "data_offset": 2048, 00:13:20.412 "data_size": 63488 00:13:20.412 } 00:13:20.412 ] 00:13:20.412 }' 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.412 06:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.980 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:20.980 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:20.980 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:20.980 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:20.980 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:20.980 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:20.980 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:20.980 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.980 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:20.980 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.980 [2024-12-06 06:39:39.359683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.980 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.980 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:20.980 "name": "Existed_Raid", 00:13:20.980 "aliases": [ 00:13:20.980 "33484285-530f-4a4b-9bc8-c63f59058bbb" 00:13:20.980 ], 00:13:20.980 "product_name": "Raid 
Volume", 00:13:20.980 "block_size": 512, 00:13:20.980 "num_blocks": 190464, 00:13:20.980 "uuid": "33484285-530f-4a4b-9bc8-c63f59058bbb", 00:13:20.980 "assigned_rate_limits": { 00:13:20.980 "rw_ios_per_sec": 0, 00:13:20.980 "rw_mbytes_per_sec": 0, 00:13:20.981 "r_mbytes_per_sec": 0, 00:13:20.981 "w_mbytes_per_sec": 0 00:13:20.981 }, 00:13:20.981 "claimed": false, 00:13:20.981 "zoned": false, 00:13:20.981 "supported_io_types": { 00:13:20.981 "read": true, 00:13:20.981 "write": true, 00:13:20.981 "unmap": true, 00:13:20.981 "flush": true, 00:13:20.981 "reset": true, 00:13:20.981 "nvme_admin": false, 00:13:20.981 "nvme_io": false, 00:13:20.981 "nvme_io_md": false, 00:13:20.981 "write_zeroes": true, 00:13:20.981 "zcopy": false, 00:13:20.981 "get_zone_info": false, 00:13:20.981 "zone_management": false, 00:13:20.981 "zone_append": false, 00:13:20.981 "compare": false, 00:13:20.981 "compare_and_write": false, 00:13:20.981 "abort": false, 00:13:20.981 "seek_hole": false, 00:13:20.981 "seek_data": false, 00:13:20.981 "copy": false, 00:13:20.981 "nvme_iov_md": false 00:13:20.981 }, 00:13:20.981 "memory_domains": [ 00:13:20.981 { 00:13:20.981 "dma_device_id": "system", 00:13:20.981 "dma_device_type": 1 00:13:20.981 }, 00:13:20.981 { 00:13:20.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.981 "dma_device_type": 2 00:13:20.981 }, 00:13:20.981 { 00:13:20.981 "dma_device_id": "system", 00:13:20.981 "dma_device_type": 1 00:13:20.981 }, 00:13:20.981 { 00:13:20.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.981 "dma_device_type": 2 00:13:20.981 }, 00:13:20.981 { 00:13:20.981 "dma_device_id": "system", 00:13:20.981 "dma_device_type": 1 00:13:20.981 }, 00:13:20.981 { 00:13:20.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.981 "dma_device_type": 2 00:13:20.981 } 00:13:20.981 ], 00:13:20.981 "driver_specific": { 00:13:20.981 "raid": { 00:13:20.981 "uuid": "33484285-530f-4a4b-9bc8-c63f59058bbb", 00:13:20.981 "strip_size_kb": 64, 00:13:20.981 "state": "online", 
00:13:20.981 "raid_level": "raid0", 00:13:20.981 "superblock": true, 00:13:20.981 "num_base_bdevs": 3, 00:13:20.981 "num_base_bdevs_discovered": 3, 00:13:20.981 "num_base_bdevs_operational": 3, 00:13:20.981 "base_bdevs_list": [ 00:13:20.981 { 00:13:20.981 "name": "NewBaseBdev", 00:13:20.981 "uuid": "8c651837-f6d5-49f6-ac77-223400f94dce", 00:13:20.981 "is_configured": true, 00:13:20.981 "data_offset": 2048, 00:13:20.981 "data_size": 63488 00:13:20.981 }, 00:13:20.981 { 00:13:20.981 "name": "BaseBdev2", 00:13:20.981 "uuid": "7f02d366-500a-4f8f-8c0a-de4bdbb909ab", 00:13:20.981 "is_configured": true, 00:13:20.981 "data_offset": 2048, 00:13:20.981 "data_size": 63488 00:13:20.981 }, 00:13:20.981 { 00:13:20.981 "name": "BaseBdev3", 00:13:20.981 "uuid": "0803d2d4-1953-4a25-a566-deaed5609e1d", 00:13:20.981 "is_configured": true, 00:13:20.981 "data_offset": 2048, 00:13:20.981 "data_size": 63488 00:13:20.981 } 00:13:20.981 ] 00:13:20.981 } 00:13:20.981 } 00:13:20.981 }' 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:20.981 BaseBdev2 00:13:20.981 BaseBdev3' 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.981 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.266 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.266 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.266 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.266 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:21.266 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.266 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.266 [2024-12-06 06:39:39.671411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.266 [2024-12-06 06:39:39.671446] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:21.266 [2024-12-06 06:39:39.671601] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:21.266 [2024-12-06 06:39:39.671676] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:21.266 [2024-12-06 06:39:39.671696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:21.266 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.266 06:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64638 00:13:21.266 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64638 ']' 00:13:21.266 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64638 00:13:21.266 06:39:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:21.266 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.266 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64638 00:13:21.266 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.266 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.266 killing process with pid 64638 00:13:21.266 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64638' 00:13:21.266 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64638 00:13:21.266 [2024-12-06 06:39:39.709429] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:21.266 06:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64638 00:13:21.526 [2024-12-06 06:39:39.985948] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:22.463 06:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:22.463 00:13:22.463 real 0m11.758s 00:13:22.463 user 0m19.444s 00:13:22.463 sys 0m1.657s 00:13:22.463 06:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.463 06:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.463 ************************************ 00:13:22.463 END TEST raid_state_function_test_sb 00:13:22.463 ************************************ 00:13:22.463 06:39:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:13:22.463 06:39:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:22.463 06:39:41 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.463 06:39:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:22.463 ************************************ 00:13:22.463 START TEST raid_superblock_test 00:13:22.463 ************************************ 00:13:22.463 06:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:22.464 06:39:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65271 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65271 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65271 ']' 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.464 06:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.724 [2024-12-06 06:39:41.193938] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:13:22.724 [2024-12-06 06:39:41.194115] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65271 ] 00:13:22.724 [2024-12-06 06:39:41.367708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.983 [2024-12-06 06:39:41.501076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.242 [2024-12-06 06:39:41.711134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.242 [2024-12-06 06:39:41.711204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.810 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.810 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:23.811 
06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.811 malloc1 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.811 [2024-12-06 06:39:42.239338] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:23.811 [2024-12-06 06:39:42.239409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.811 [2024-12-06 06:39:42.239443] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:23.811 [2024-12-06 06:39:42.239460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.811 [2024-12-06 06:39:42.242572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.811 [2024-12-06 06:39:42.242617] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:23.811 pt1 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.811 malloc2 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.811 [2024-12-06 06:39:42.296150] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:23.811 [2024-12-06 06:39:42.296217] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.811 [2024-12-06 06:39:42.296257] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:23.811 [2024-12-06 06:39:42.296273] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.811 [2024-12-06 06:39:42.299108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.811 [2024-12-06 06:39:42.299146] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:23.811 
pt2 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.811 malloc3 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.811 [2024-12-06 06:39:42.365213] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:23.811 [2024-12-06 06:39:42.365277] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.811 [2024-12-06 06:39:42.365311] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:23.811 [2024-12-06 06:39:42.365326] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.811 [2024-12-06 06:39:42.368141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.811 [2024-12-06 06:39:42.368181] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:23.811 pt3 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.811 [2024-12-06 06:39:42.377283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:23.811 [2024-12-06 06:39:42.379735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:23.811 [2024-12-06 06:39:42.379843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:23.811 [2024-12-06 06:39:42.380064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:23.811 [2024-12-06 06:39:42.380096] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:23.811 [2024-12-06 06:39:42.380464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:13:23.811 [2024-12-06 06:39:42.380708] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:23.811 [2024-12-06 06:39:42.380735] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:23.811 [2024-12-06 06:39:42.380950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:23.811 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.812 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.812 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:23.812 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.812 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.812 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.812 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.812 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.812 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.812 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.812 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.812 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.812 06:39:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.812 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.812 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.812 "name": "raid_bdev1", 00:13:23.812 "uuid": "b89db6cc-0dc5-45e8-b987-0725a794b730", 00:13:23.812 "strip_size_kb": 64, 00:13:23.812 "state": "online", 00:13:23.812 "raid_level": "raid0", 00:13:23.812 "superblock": true, 00:13:23.812 "num_base_bdevs": 3, 00:13:23.812 "num_base_bdevs_discovered": 3, 00:13:23.812 "num_base_bdevs_operational": 3, 00:13:23.812 "base_bdevs_list": [ 00:13:23.812 { 00:13:23.812 "name": "pt1", 00:13:23.812 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:23.812 "is_configured": true, 00:13:23.812 "data_offset": 2048, 00:13:23.812 "data_size": 63488 00:13:23.812 }, 00:13:23.812 { 00:13:23.812 "name": "pt2", 00:13:23.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:23.812 "is_configured": true, 00:13:23.812 "data_offset": 2048, 00:13:23.812 "data_size": 63488 00:13:23.812 }, 00:13:23.812 { 00:13:23.812 "name": "pt3", 00:13:23.812 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:23.812 "is_configured": true, 00:13:23.812 "data_offset": 2048, 00:13:23.812 "data_size": 63488 00:13:23.812 } 00:13:23.812 ] 00:13:23.812 }' 00:13:23.812 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.812 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.390 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:24.391 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:24.391 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:24.391 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:13:24.391 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:24.391 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:24.391 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:24.391 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.391 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.391 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:24.391 [2024-12-06 06:39:42.933816] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.391 06:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.391 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:24.391 "name": "raid_bdev1", 00:13:24.391 "aliases": [ 00:13:24.391 "b89db6cc-0dc5-45e8-b987-0725a794b730" 00:13:24.391 ], 00:13:24.391 "product_name": "Raid Volume", 00:13:24.391 "block_size": 512, 00:13:24.391 "num_blocks": 190464, 00:13:24.391 "uuid": "b89db6cc-0dc5-45e8-b987-0725a794b730", 00:13:24.391 "assigned_rate_limits": { 00:13:24.391 "rw_ios_per_sec": 0, 00:13:24.391 "rw_mbytes_per_sec": 0, 00:13:24.391 "r_mbytes_per_sec": 0, 00:13:24.391 "w_mbytes_per_sec": 0 00:13:24.391 }, 00:13:24.391 "claimed": false, 00:13:24.391 "zoned": false, 00:13:24.391 "supported_io_types": { 00:13:24.391 "read": true, 00:13:24.391 "write": true, 00:13:24.391 "unmap": true, 00:13:24.391 "flush": true, 00:13:24.391 "reset": true, 00:13:24.391 "nvme_admin": false, 00:13:24.391 "nvme_io": false, 00:13:24.391 "nvme_io_md": false, 00:13:24.391 "write_zeroes": true, 00:13:24.391 "zcopy": false, 00:13:24.391 "get_zone_info": false, 00:13:24.391 "zone_management": false, 00:13:24.391 "zone_append": false, 00:13:24.391 "compare": 
false, 00:13:24.391 "compare_and_write": false, 00:13:24.391 "abort": false, 00:13:24.391 "seek_hole": false, 00:13:24.391 "seek_data": false, 00:13:24.391 "copy": false, 00:13:24.391 "nvme_iov_md": false 00:13:24.391 }, 00:13:24.391 "memory_domains": [ 00:13:24.391 { 00:13:24.391 "dma_device_id": "system", 00:13:24.391 "dma_device_type": 1 00:13:24.391 }, 00:13:24.391 { 00:13:24.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.391 "dma_device_type": 2 00:13:24.391 }, 00:13:24.391 { 00:13:24.391 "dma_device_id": "system", 00:13:24.391 "dma_device_type": 1 00:13:24.391 }, 00:13:24.391 { 00:13:24.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.391 "dma_device_type": 2 00:13:24.391 }, 00:13:24.391 { 00:13:24.391 "dma_device_id": "system", 00:13:24.391 "dma_device_type": 1 00:13:24.391 }, 00:13:24.391 { 00:13:24.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.391 "dma_device_type": 2 00:13:24.391 } 00:13:24.391 ], 00:13:24.391 "driver_specific": { 00:13:24.391 "raid": { 00:13:24.391 "uuid": "b89db6cc-0dc5-45e8-b987-0725a794b730", 00:13:24.391 "strip_size_kb": 64, 00:13:24.391 "state": "online", 00:13:24.391 "raid_level": "raid0", 00:13:24.391 "superblock": true, 00:13:24.391 "num_base_bdevs": 3, 00:13:24.391 "num_base_bdevs_discovered": 3, 00:13:24.391 "num_base_bdevs_operational": 3, 00:13:24.391 "base_bdevs_list": [ 00:13:24.391 { 00:13:24.391 "name": "pt1", 00:13:24.391 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:24.391 "is_configured": true, 00:13:24.391 "data_offset": 2048, 00:13:24.391 "data_size": 63488 00:13:24.391 }, 00:13:24.391 { 00:13:24.391 "name": "pt2", 00:13:24.391 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:24.391 "is_configured": true, 00:13:24.391 "data_offset": 2048, 00:13:24.391 "data_size": 63488 00:13:24.391 }, 00:13:24.391 { 00:13:24.391 "name": "pt3", 00:13:24.391 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:24.391 "is_configured": true, 00:13:24.391 "data_offset": 2048, 00:13:24.391 "data_size": 
63488 00:13:24.391 } 00:13:24.391 ] 00:13:24.391 } 00:13:24.391 } 00:13:24.391 }' 00:13:24.391 06:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:24.650 pt2 00:13:24.650 pt3' 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.650 
06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.650 [2024-12-06 06:39:43.257843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.650 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b89db6cc-0dc5-45e8-b987-0725a794b730 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b89db6cc-0dc5-45e8-b987-0725a794b730 ']' 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.910 [2024-12-06 06:39:43.301470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.910 [2024-12-06 06:39:43.301512] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.910 [2024-12-06 06:39:43.301634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.910 [2024-12-06 06:39:43.301725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.910 [2024-12-06 06:39:43.301750] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.910 [2024-12-06 06:39:43.457592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:24.910 [2024-12-06 06:39:43.460061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:24.910 [2024-12-06 06:39:43.460138] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:24.910 [2024-12-06 06:39:43.460217] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:24.910 [2024-12-06 06:39:43.460289] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:24.910 [2024-12-06 06:39:43.460331] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:24.910 [2024-12-06 06:39:43.460359] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.910 [2024-12-06 06:39:43.460375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:24.910 request: 00:13:24.910 { 00:13:24.910 "name": "raid_bdev1", 00:13:24.910 "raid_level": "raid0", 00:13:24.910 "base_bdevs": [ 00:13:24.910 "malloc1", 00:13:24.910 "malloc2", 00:13:24.910 "malloc3" 00:13:24.910 ], 00:13:24.910 "strip_size_kb": 64, 00:13:24.910 "superblock": false, 00:13:24.910 "method": "bdev_raid_create", 00:13:24.910 "req_id": 1 00:13:24.910 } 00:13:24.910 Got JSON-RPC error response 00:13:24.910 response: 00:13:24.910 { 00:13:24.910 "code": -17, 00:13:24.910 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:24.910 } 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.910 [2024-12-06 06:39:43.517540] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:24.910 [2024-12-06 06:39:43.517609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.910 [2024-12-06 06:39:43.517642] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:24.910 [2024-12-06 06:39:43.517657] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.910 [2024-12-06 06:39:43.520554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.910 [2024-12-06 06:39:43.520592] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:24.910 [2024-12-06 06:39:43.520713] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:24.910 [2024-12-06 06:39:43.520782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:13:24.910 pt1 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.910 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.911 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.911 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.911 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.911 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.911 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.169 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.169 "name": "raid_bdev1", 00:13:25.169 "uuid": "b89db6cc-0dc5-45e8-b987-0725a794b730", 00:13:25.169 
"strip_size_kb": 64, 00:13:25.169 "state": "configuring", 00:13:25.169 "raid_level": "raid0", 00:13:25.169 "superblock": true, 00:13:25.169 "num_base_bdevs": 3, 00:13:25.169 "num_base_bdevs_discovered": 1, 00:13:25.169 "num_base_bdevs_operational": 3, 00:13:25.169 "base_bdevs_list": [ 00:13:25.169 { 00:13:25.169 "name": "pt1", 00:13:25.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:25.169 "is_configured": true, 00:13:25.169 "data_offset": 2048, 00:13:25.169 "data_size": 63488 00:13:25.169 }, 00:13:25.169 { 00:13:25.169 "name": null, 00:13:25.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:25.169 "is_configured": false, 00:13:25.169 "data_offset": 2048, 00:13:25.169 "data_size": 63488 00:13:25.169 }, 00:13:25.169 { 00:13:25.169 "name": null, 00:13:25.169 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:25.169 "is_configured": false, 00:13:25.169 "data_offset": 2048, 00:13:25.169 "data_size": 63488 00:13:25.169 } 00:13:25.169 ] 00:13:25.169 }' 00:13:25.169 06:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.169 06:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.459 [2024-12-06 06:39:44.041682] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:25.459 [2024-12-06 06:39:44.041766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.459 [2024-12-06 06:39:44.041807] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:13:25.459 [2024-12-06 06:39:44.041823] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.459 [2024-12-06 06:39:44.042376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.459 [2024-12-06 06:39:44.042408] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:25.459 [2024-12-06 06:39:44.042514] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:25.459 [2024-12-06 06:39:44.042573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:25.459 pt2 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.459 [2024-12-06 06:39:44.049656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.459 06:39:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.459 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.717 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.717 "name": "raid_bdev1", 00:13:25.717 "uuid": "b89db6cc-0dc5-45e8-b987-0725a794b730", 00:13:25.717 "strip_size_kb": 64, 00:13:25.717 "state": "configuring", 00:13:25.717 "raid_level": "raid0", 00:13:25.717 "superblock": true, 00:13:25.717 "num_base_bdevs": 3, 00:13:25.717 "num_base_bdevs_discovered": 1, 00:13:25.717 "num_base_bdevs_operational": 3, 00:13:25.717 "base_bdevs_list": [ 00:13:25.717 { 00:13:25.717 "name": "pt1", 00:13:25.717 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:25.717 "is_configured": true, 00:13:25.717 "data_offset": 2048, 00:13:25.717 "data_size": 63488 00:13:25.717 }, 00:13:25.717 { 00:13:25.717 "name": null, 00:13:25.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:25.717 "is_configured": false, 00:13:25.717 "data_offset": 0, 00:13:25.717 "data_size": 63488 00:13:25.717 }, 00:13:25.717 { 00:13:25.717 "name": null, 00:13:25.717 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:25.717 
"is_configured": false, 00:13:25.717 "data_offset": 2048, 00:13:25.717 "data_size": 63488 00:13:25.717 } 00:13:25.717 ] 00:13:25.717 }' 00:13:25.717 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.717 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.976 [2024-12-06 06:39:44.577789] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:25.976 [2024-12-06 06:39:44.577869] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.976 [2024-12-06 06:39:44.577897] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:25.976 [2024-12-06 06:39:44.577916] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.976 [2024-12-06 06:39:44.578489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.976 [2024-12-06 06:39:44.578545] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:25.976 [2024-12-06 06:39:44.578647] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:25.976 [2024-12-06 06:39:44.578683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:25.976 pt2 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.976 [2024-12-06 06:39:44.585756] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:25.976 [2024-12-06 06:39:44.585809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.976 [2024-12-06 06:39:44.585831] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:25.976 [2024-12-06 06:39:44.585847] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.976 [2024-12-06 06:39:44.586291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.976 [2024-12-06 06:39:44.586332] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:25.976 [2024-12-06 06:39:44.586407] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:25.976 [2024-12-06 06:39:44.586440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:25.976 [2024-12-06 06:39:44.586611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:25.976 [2024-12-06 06:39:44.586637] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:25.976 [2024-12-06 06:39:44.586956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:25.976 [2024-12-06 06:39:44.587158] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:25.976 [2024-12-06 06:39:44.587183] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:25.976 [2024-12-06 06:39:44.587363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.976 pt3 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.976 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.236 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.236 "name": "raid_bdev1", 00:13:26.236 "uuid": "b89db6cc-0dc5-45e8-b987-0725a794b730", 00:13:26.236 "strip_size_kb": 64, 00:13:26.236 "state": "online", 00:13:26.236 "raid_level": "raid0", 00:13:26.236 "superblock": true, 00:13:26.236 "num_base_bdevs": 3, 00:13:26.236 "num_base_bdevs_discovered": 3, 00:13:26.236 "num_base_bdevs_operational": 3, 00:13:26.236 "base_bdevs_list": [ 00:13:26.236 { 00:13:26.236 "name": "pt1", 00:13:26.236 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:26.236 "is_configured": true, 00:13:26.236 "data_offset": 2048, 00:13:26.236 "data_size": 63488 00:13:26.236 }, 00:13:26.236 { 00:13:26.236 "name": "pt2", 00:13:26.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:26.236 "is_configured": true, 00:13:26.236 "data_offset": 2048, 00:13:26.236 "data_size": 63488 00:13:26.236 }, 00:13:26.236 { 00:13:26.236 "name": "pt3", 00:13:26.236 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:26.236 "is_configured": true, 00:13:26.236 "data_offset": 2048, 00:13:26.236 "data_size": 63488 00:13:26.236 } 00:13:26.236 ] 00:13:26.236 }' 00:13:26.236 06:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.236 06:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:26.804 06:39:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.804 [2024-12-06 06:39:45.206349] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:26.804 "name": "raid_bdev1", 00:13:26.804 "aliases": [ 00:13:26.804 "b89db6cc-0dc5-45e8-b987-0725a794b730" 00:13:26.804 ], 00:13:26.804 "product_name": "Raid Volume", 00:13:26.804 "block_size": 512, 00:13:26.804 "num_blocks": 190464, 00:13:26.804 "uuid": "b89db6cc-0dc5-45e8-b987-0725a794b730", 00:13:26.804 "assigned_rate_limits": { 00:13:26.804 "rw_ios_per_sec": 0, 00:13:26.804 "rw_mbytes_per_sec": 0, 00:13:26.804 "r_mbytes_per_sec": 0, 00:13:26.804 "w_mbytes_per_sec": 0 00:13:26.804 }, 00:13:26.804 "claimed": false, 00:13:26.804 "zoned": false, 00:13:26.804 "supported_io_types": { 00:13:26.804 "read": true, 00:13:26.804 "write": true, 00:13:26.804 "unmap": true, 00:13:26.804 "flush": true, 00:13:26.804 "reset": true, 00:13:26.804 "nvme_admin": false, 00:13:26.804 "nvme_io": false, 00:13:26.804 "nvme_io_md": false, 00:13:26.804 
"write_zeroes": true, 00:13:26.804 "zcopy": false, 00:13:26.804 "get_zone_info": false, 00:13:26.804 "zone_management": false, 00:13:26.804 "zone_append": false, 00:13:26.804 "compare": false, 00:13:26.804 "compare_and_write": false, 00:13:26.804 "abort": false, 00:13:26.804 "seek_hole": false, 00:13:26.804 "seek_data": false, 00:13:26.804 "copy": false, 00:13:26.804 "nvme_iov_md": false 00:13:26.804 }, 00:13:26.804 "memory_domains": [ 00:13:26.804 { 00:13:26.804 "dma_device_id": "system", 00:13:26.804 "dma_device_type": 1 00:13:26.804 }, 00:13:26.804 { 00:13:26.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.804 "dma_device_type": 2 00:13:26.804 }, 00:13:26.804 { 00:13:26.804 "dma_device_id": "system", 00:13:26.804 "dma_device_type": 1 00:13:26.804 }, 00:13:26.804 { 00:13:26.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.804 "dma_device_type": 2 00:13:26.804 }, 00:13:26.804 { 00:13:26.804 "dma_device_id": "system", 00:13:26.804 "dma_device_type": 1 00:13:26.804 }, 00:13:26.804 { 00:13:26.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.804 "dma_device_type": 2 00:13:26.804 } 00:13:26.804 ], 00:13:26.804 "driver_specific": { 00:13:26.804 "raid": { 00:13:26.804 "uuid": "b89db6cc-0dc5-45e8-b987-0725a794b730", 00:13:26.804 "strip_size_kb": 64, 00:13:26.804 "state": "online", 00:13:26.804 "raid_level": "raid0", 00:13:26.804 "superblock": true, 00:13:26.804 "num_base_bdevs": 3, 00:13:26.804 "num_base_bdevs_discovered": 3, 00:13:26.804 "num_base_bdevs_operational": 3, 00:13:26.804 "base_bdevs_list": [ 00:13:26.804 { 00:13:26.804 "name": "pt1", 00:13:26.804 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:26.804 "is_configured": true, 00:13:26.804 "data_offset": 2048, 00:13:26.804 "data_size": 63488 00:13:26.804 }, 00:13:26.804 { 00:13:26.804 "name": "pt2", 00:13:26.804 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:26.804 "is_configured": true, 00:13:26.804 "data_offset": 2048, 00:13:26.804 "data_size": 63488 00:13:26.804 }, 00:13:26.804 
{ 00:13:26.804 "name": "pt3", 00:13:26.804 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:26.804 "is_configured": true, 00:13:26.804 "data_offset": 2048, 00:13:26.804 "data_size": 63488 00:13:26.804 } 00:13:26.804 ] 00:13:26.804 } 00:13:26.804 } 00:13:26.804 }' 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:26.804 pt2 00:13:26.804 pt3' 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:26.804 06:39:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.804 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:27.063 
[2024-12-06 06:39:45.534461] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b89db6cc-0dc5-45e8-b987-0725a794b730 '!=' b89db6cc-0dc5-45e8-b987-0725a794b730 ']' 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65271 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65271 ']' 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65271 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65271 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65271' 00:13:27.063 killing process with pid 65271 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65271 00:13:27.063 [2024-12-06 06:39:45.605003] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.063 06:39:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65271 00:13:27.063 [2024-12-06 06:39:45.605131] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.064 [2024-12-06 06:39:45.605234] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.064 [2024-12-06 06:39:45.605256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:27.322 [2024-12-06 06:39:45.885205] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:28.700 06:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:28.700 00:13:28.700 real 0m5.859s 00:13:28.700 user 0m8.833s 00:13:28.700 sys 0m0.870s 00:13:28.700 06:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.700 06:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.700 ************************************ 00:13:28.700 END TEST raid_superblock_test 00:13:28.700 ************************************ 00:13:28.700 06:39:47 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:13:28.700 06:39:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:28.700 06:39:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.700 06:39:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:28.700 ************************************ 00:13:28.700 START TEST raid_read_error_test 00:13:28.700 ************************************ 00:13:28.700 06:39:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:13:28.700 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:28.700 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:28.701 06:39:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.t3NuK7jb3M 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65529 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65529 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65529 ']' 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.701 06:39:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.701 [2024-12-06 06:39:47.116182] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:13:28.701 [2024-12-06 06:39:47.116328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65529 ] 00:13:28.701 [2024-12-06 06:39:47.290492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.960 [2024-12-06 06:39:47.447173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.218 [2024-12-06 06:39:47.678297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.218 [2024-12-06 06:39:47.678388] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.784 BaseBdev1_malloc 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.784 true 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.784 [2024-12-06 06:39:48.202084] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:29.784 [2024-12-06 06:39:48.202159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.784 [2024-12-06 06:39:48.202187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:29.784 [2024-12-06 06:39:48.202205] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.784 [2024-12-06 06:39:48.205055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.784 [2024-12-06 06:39:48.205115] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:29.784 BaseBdev1 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.784 BaseBdev2_malloc 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.784 true 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.784 [2024-12-06 06:39:48.261764] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:29.784 [2024-12-06 06:39:48.261828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.784 [2024-12-06 06:39:48.261853] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:29.784 [2024-12-06 06:39:48.261870] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.784 [2024-12-06 06:39:48.264624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.784 [2024-12-06 06:39:48.264670] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:29.784 BaseBdev2 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.784 BaseBdev3_malloc 00:13:29.784 06:39:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.784 true 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.784 [2024-12-06 06:39:48.333709] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:29.784 [2024-12-06 06:39:48.333772] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.784 [2024-12-06 06:39:48.333798] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:29.784 [2024-12-06 06:39:48.333816] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.784 [2024-12-06 06:39:48.336692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.784 [2024-12-06 06:39:48.336739] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:29.784 BaseBdev3 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.784 [2024-12-06 06:39:48.341806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.784 [2024-12-06 06:39:48.344256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.784 [2024-12-06 06:39:48.344367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:29.784 [2024-12-06 06:39:48.344661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:29.784 [2024-12-06 06:39:48.344691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:29.784 [2024-12-06 06:39:48.345005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:29.784 [2024-12-06 06:39:48.345252] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:29.784 [2024-12-06 06:39:48.345284] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:29.784 [2024-12-06 06:39:48.345478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.784 06:39:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.784 "name": "raid_bdev1", 00:13:29.784 "uuid": "8c3498b8-d9d4-4908-82ba-d6b0005d67a2", 00:13:29.784 "strip_size_kb": 64, 00:13:29.784 "state": "online", 00:13:29.784 "raid_level": "raid0", 00:13:29.784 "superblock": true, 00:13:29.784 "num_base_bdevs": 3, 00:13:29.784 "num_base_bdevs_discovered": 3, 00:13:29.784 "num_base_bdevs_operational": 3, 00:13:29.784 "base_bdevs_list": [ 00:13:29.784 { 00:13:29.784 "name": "BaseBdev1", 00:13:29.784 "uuid": "bd3d46a3-d553-5082-9320-a199ed7da1e5", 00:13:29.784 "is_configured": true, 00:13:29.784 "data_offset": 2048, 00:13:29.784 "data_size": 63488 00:13:29.784 }, 00:13:29.784 { 00:13:29.784 "name": "BaseBdev2", 00:13:29.784 "uuid": "19be73da-142d-537d-a253-cd16e670de2c", 00:13:29.784 "is_configured": true, 00:13:29.784 "data_offset": 2048, 00:13:29.784 "data_size": 63488 
00:13:29.784 }, 00:13:29.784 { 00:13:29.784 "name": "BaseBdev3", 00:13:29.784 "uuid": "ed15d2b2-83be-5fba-9a03-e8b01d64cbd5", 00:13:29.784 "is_configured": true, 00:13:29.784 "data_offset": 2048, 00:13:29.784 "data_size": 63488 00:13:29.784 } 00:13:29.784 ] 00:13:29.784 }' 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.784 06:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.350 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:30.350 06:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:30.607 [2024-12-06 06:39:48.995500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.541 "name": "raid_bdev1", 00:13:31.541 "uuid": "8c3498b8-d9d4-4908-82ba-d6b0005d67a2", 00:13:31.541 "strip_size_kb": 64, 00:13:31.541 "state": "online", 00:13:31.541 "raid_level": "raid0", 00:13:31.541 "superblock": true, 00:13:31.541 "num_base_bdevs": 3, 00:13:31.541 "num_base_bdevs_discovered": 3, 00:13:31.541 "num_base_bdevs_operational": 3, 00:13:31.541 "base_bdevs_list": [ 00:13:31.541 { 00:13:31.541 "name": "BaseBdev1", 00:13:31.541 "uuid": "bd3d46a3-d553-5082-9320-a199ed7da1e5", 00:13:31.541 "is_configured": true, 00:13:31.541 "data_offset": 2048, 00:13:31.541 "data_size": 63488 
00:13:31.541 }, 00:13:31.541 { 00:13:31.541 "name": "BaseBdev2", 00:13:31.541 "uuid": "19be73da-142d-537d-a253-cd16e670de2c", 00:13:31.541 "is_configured": true, 00:13:31.541 "data_offset": 2048, 00:13:31.541 "data_size": 63488 00:13:31.541 }, 00:13:31.541 { 00:13:31.541 "name": "BaseBdev3", 00:13:31.541 "uuid": "ed15d2b2-83be-5fba-9a03-e8b01d64cbd5", 00:13:31.541 "is_configured": true, 00:13:31.541 "data_offset": 2048, 00:13:31.541 "data_size": 63488 00:13:31.541 } 00:13:31.541 ] 00:13:31.541 }' 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.541 06:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.800 06:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:31.801 06:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.801 06:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.801 [2024-12-06 06:39:50.374977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:31.801 [2024-12-06 06:39:50.375026] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.801 [2024-12-06 06:39:50.378575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.801 [2024-12-06 06:39:50.378645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.801 [2024-12-06 06:39:50.378701] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.801 [2024-12-06 06:39:50.378716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:31.801 { 00:13:31.801 "results": [ 00:13:31.801 { 00:13:31.801 "job": "raid_bdev1", 00:13:31.801 "core_mask": "0x1", 00:13:31.801 "workload": "randrw", 00:13:31.801 "percentage": 50, 
00:13:31.801 "status": "finished", 00:13:31.801 "queue_depth": 1, 00:13:31.801 "io_size": 131072, 00:13:31.801 "runtime": 1.376959, 00:13:31.801 "iops": 10353.975681193122, 00:13:31.801 "mibps": 1294.2469601491402, 00:13:31.801 "io_failed": 1, 00:13:31.801 "io_timeout": 0, 00:13:31.801 "avg_latency_us": 134.35660643466508, 00:13:31.801 "min_latency_us": 40.72727272727273, 00:13:31.801 "max_latency_us": 1824.581818181818 00:13:31.801 } 00:13:31.801 ], 00:13:31.801 "core_count": 1 00:13:31.801 } 00:13:31.801 06:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.801 06:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65529 00:13:31.801 06:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65529 ']' 00:13:31.801 06:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65529 00:13:31.801 06:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:31.801 06:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.801 06:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65529 00:13:31.801 06:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.801 06:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.801 killing process with pid 65529 00:13:31.801 06:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65529' 00:13:31.801 06:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65529 00:13:31.801 [2024-12-06 06:39:50.414382] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.801 06:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65529 00:13:32.060 [2024-12-06 
06:39:50.622944] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.438 06:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:33.438 06:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.t3NuK7jb3M 00:13:33.438 06:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:33.438 06:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:13:33.438 06:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:33.438 06:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:33.438 06:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:33.438 06:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:13:33.438 00:13:33.438 real 0m4.762s 00:13:33.438 user 0m5.887s 00:13:33.438 sys 0m0.579s 00:13:33.438 06:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.438 06:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.438 ************************************ 00:13:33.438 END TEST raid_read_error_test 00:13:33.438 ************************************ 00:13:33.438 06:39:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:13:33.438 06:39:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:33.438 06:39:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.438 06:39:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:33.438 ************************************ 00:13:33.438 START TEST raid_write_error_test 00:13:33.438 ************************************ 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:13:33.438 06:39:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:33.438 06:39:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.w3Je4r4qq2 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65675 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65675 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65675 ']' 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.438 06:39:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.438 [2024-12-06 06:39:51.949344] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:13:33.438 [2024-12-06 06:39:51.949545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65675 ] 00:13:33.697 [2024-12-06 06:39:52.129422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.697 [2024-12-06 06:39:52.262934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.957 [2024-12-06 06:39:52.470321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.957 [2024-12-06 06:39:52.470407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.526 06:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.526 06:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:34.526 06:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:34.526 06:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:34.526 06:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.526 06:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.526 BaseBdev1_malloc 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.526 true 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.526 [2024-12-06 06:39:53.017174] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:34.526 [2024-12-06 06:39:53.017238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.526 [2024-12-06 06:39:53.017267] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:34.526 [2024-12-06 06:39:53.017286] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.526 [2024-12-06 06:39:53.020674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.526 [2024-12-06 06:39:53.020721] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:34.526 BaseBdev1 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.526 BaseBdev2_malloc 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.526 true 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.526 [2024-12-06 06:39:53.073970] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:34.526 [2024-12-06 06:39:53.074061] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.526 [2024-12-06 06:39:53.074085] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:34.526 [2024-12-06 06:39:53.074102] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.526 [2024-12-06 06:39:53.076904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.526 [2024-12-06 06:39:53.076979] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:34.526 BaseBdev2 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:34.526 06:39:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.526 BaseBdev3_malloc 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.526 true 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.526 [2024-12-06 06:39:53.152644] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:34.526 [2024-12-06 06:39:53.152707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.526 [2024-12-06 06:39:53.152734] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:34.526 [2024-12-06 06:39:53.152751] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.526 [2024-12-06 06:39:53.155623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.526 [2024-12-06 06:39:53.155672] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:34.526 BaseBdev3 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.526 [2024-12-06 06:39:53.160752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.526 [2024-12-06 06:39:53.163211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.526 [2024-12-06 06:39:53.163319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:34.526 [2024-12-06 06:39:53.163594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:34.526 [2024-12-06 06:39:53.163624] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:34.526 [2024-12-06 06:39:53.163944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:34.526 [2024-12-06 06:39:53.164164] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:34.526 [2024-12-06 06:39:53.164215] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:34.526 [2024-12-06 06:39:53.164399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.526 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.785 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.785 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.785 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.785 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.785 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.785 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.785 "name": "raid_bdev1", 00:13:34.785 "uuid": "d945d543-9d99-4a2f-b568-8558ae84a634", 00:13:34.785 "strip_size_kb": 64, 00:13:34.785 "state": "online", 00:13:34.785 "raid_level": "raid0", 00:13:34.785 "superblock": true, 00:13:34.785 "num_base_bdevs": 3, 00:13:34.785 "num_base_bdevs_discovered": 3, 00:13:34.785 "num_base_bdevs_operational": 3, 00:13:34.785 "base_bdevs_list": [ 00:13:34.785 { 00:13:34.785 "name": "BaseBdev1", 
00:13:34.785 "uuid": "4584fab0-4749-5368-b103-0a1879d49b55", 00:13:34.785 "is_configured": true, 00:13:34.785 "data_offset": 2048, 00:13:34.785 "data_size": 63488 00:13:34.785 }, 00:13:34.785 { 00:13:34.785 "name": "BaseBdev2", 00:13:34.785 "uuid": "7b5b8a96-2bc3-5e01-9996-ea07d09ab6f4", 00:13:34.785 "is_configured": true, 00:13:34.785 "data_offset": 2048, 00:13:34.785 "data_size": 63488 00:13:34.785 }, 00:13:34.785 { 00:13:34.785 "name": "BaseBdev3", 00:13:34.785 "uuid": "c85f8cd4-134c-5aa6-b521-07ca3295c10a", 00:13:34.785 "is_configured": true, 00:13:34.785 "data_offset": 2048, 00:13:34.785 "data_size": 63488 00:13:34.785 } 00:13:34.785 ] 00:13:34.785 }' 00:13:34.785 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.785 06:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.045 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:35.045 06:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:35.305 [2024-12-06 06:39:53.766364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.242 06:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.242 "name": "raid_bdev1", 00:13:36.242 "uuid": "d945d543-9d99-4a2f-b568-8558ae84a634", 00:13:36.242 "strip_size_kb": 64, 00:13:36.242 "state": "online", 00:13:36.242 
"raid_level": "raid0", 00:13:36.242 "superblock": true, 00:13:36.242 "num_base_bdevs": 3, 00:13:36.242 "num_base_bdevs_discovered": 3, 00:13:36.242 "num_base_bdevs_operational": 3, 00:13:36.242 "base_bdevs_list": [ 00:13:36.242 { 00:13:36.242 "name": "BaseBdev1", 00:13:36.242 "uuid": "4584fab0-4749-5368-b103-0a1879d49b55", 00:13:36.242 "is_configured": true, 00:13:36.242 "data_offset": 2048, 00:13:36.243 "data_size": 63488 00:13:36.243 }, 00:13:36.243 { 00:13:36.243 "name": "BaseBdev2", 00:13:36.243 "uuid": "7b5b8a96-2bc3-5e01-9996-ea07d09ab6f4", 00:13:36.243 "is_configured": true, 00:13:36.243 "data_offset": 2048, 00:13:36.243 "data_size": 63488 00:13:36.243 }, 00:13:36.243 { 00:13:36.243 "name": "BaseBdev3", 00:13:36.243 "uuid": "c85f8cd4-134c-5aa6-b521-07ca3295c10a", 00:13:36.243 "is_configured": true, 00:13:36.243 "data_offset": 2048, 00:13:36.243 "data_size": 63488 00:13:36.243 } 00:13:36.243 ] 00:13:36.243 }' 00:13:36.243 06:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.243 06:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.810 06:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:36.810 06:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.810 06:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.810 [2024-12-06 06:39:55.190089] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.810 [2024-12-06 06:39:55.190125] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:36.810 [2024-12-06 06:39:55.193670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.810 [2024-12-06 06:39:55.193728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.810 [2024-12-06 06:39:55.193783] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.810 [2024-12-06 06:39:55.193799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:36.810 { 00:13:36.810 "results": [ 00:13:36.810 { 00:13:36.810 "job": "raid_bdev1", 00:13:36.810 "core_mask": "0x1", 00:13:36.810 "workload": "randrw", 00:13:36.810 "percentage": 50, 00:13:36.810 "status": "finished", 00:13:36.810 "queue_depth": 1, 00:13:36.810 "io_size": 131072, 00:13:36.810 "runtime": 1.421331, 00:13:36.810 "iops": 10470.467470279618, 00:13:36.810 "mibps": 1308.8084337849523, 00:13:36.810 "io_failed": 1, 00:13:36.810 "io_timeout": 0, 00:13:36.810 "avg_latency_us": 133.00833140923444, 00:13:36.810 "min_latency_us": 28.276363636363637, 00:13:36.810 "max_latency_us": 1854.370909090909 00:13:36.810 } 00:13:36.810 ], 00:13:36.810 "core_count": 1 00:13:36.810 } 00:13:36.810 06:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.810 06:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65675 00:13:36.810 06:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65675 ']' 00:13:36.810 06:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65675 00:13:36.810 06:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:36.810 06:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:36.810 06:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65675 00:13:36.810 killing process with pid 65675 00:13:36.810 06:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:36.810 06:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:36.810 
06:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65675' 00:13:36.810 06:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65675 00:13:36.810 [2024-12-06 06:39:55.227156] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:36.810 06:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65675 00:13:36.810 [2024-12-06 06:39:55.437642] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:38.199 06:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:38.199 06:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.w3Je4r4qq2 00:13:38.199 06:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:38.199 06:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:13:38.199 06:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:38.199 06:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:38.199 06:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:38.199 06:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:13:38.199 00:13:38.199 real 0m4.725s 00:13:38.199 user 0m5.859s 00:13:38.199 sys 0m0.572s 00:13:38.199 06:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.199 06:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.199 ************************************ 00:13:38.199 END TEST raid_write_error_test 00:13:38.199 ************************************ 00:13:38.199 06:39:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:38.199 06:39:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:13:38.199 06:39:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:38.199 06:39:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.199 06:39:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:38.199 ************************************ 00:13:38.199 START TEST raid_state_function_test 00:13:38.199 ************************************ 00:13:38.199 06:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:13:38.199 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:38.199 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:38.199 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:38.199 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:38.199 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:38.199 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:38.199 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:38.200 06:39:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:38.200 Process raid pid: 65824 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65824 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65824' 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65824 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65824 ']' 00:13:38.200 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.200 06:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.200 [2024-12-06 06:39:56.729274] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:13:38.200 [2024-12-06 06:39:56.729454] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.459 [2024-12-06 06:39:56.916327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.459 [2024-12-06 06:39:57.052298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.718 [2024-12-06 06:39:57.263906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.718 [2024-12-06 06:39:57.263957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.287 [2024-12-06 06:39:57.702849] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:39.287 [2024-12-06 06:39:57.702935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:39.287 [2024-12-06 06:39:57.702966] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:39.287 [2024-12-06 06:39:57.702981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:39.287 [2024-12-06 06:39:57.702991] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:39.287 [2024-12-06 06:39:57.703004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.287 "name": "Existed_Raid", 00:13:39.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.287 "strip_size_kb": 64, 00:13:39.287 "state": "configuring", 00:13:39.287 "raid_level": "concat", 00:13:39.287 "superblock": false, 00:13:39.287 "num_base_bdevs": 3, 00:13:39.287 "num_base_bdevs_discovered": 0, 00:13:39.287 "num_base_bdevs_operational": 3, 00:13:39.287 "base_bdevs_list": [ 00:13:39.287 { 00:13:39.287 "name": "BaseBdev1", 00:13:39.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.287 "is_configured": false, 00:13:39.287 "data_offset": 0, 00:13:39.287 "data_size": 0 00:13:39.287 }, 00:13:39.287 { 00:13:39.287 "name": "BaseBdev2", 00:13:39.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.287 "is_configured": false, 00:13:39.287 "data_offset": 0, 00:13:39.287 "data_size": 0 00:13:39.287 }, 00:13:39.287 { 00:13:39.287 "name": "BaseBdev3", 00:13:39.287 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:39.287 "is_configured": false, 00:13:39.287 "data_offset": 0, 00:13:39.287 "data_size": 0 00:13:39.287 } 00:13:39.287 ] 00:13:39.287 }' 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.287 06:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.856 [2024-12-06 06:39:58.238950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:39.856 [2024-12-06 06:39:58.239166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.856 [2024-12-06 06:39:58.246920] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:39.856 [2024-12-06 06:39:58.246980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:39.856 [2024-12-06 06:39:58.246995] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:39.856 [2024-12-06 06:39:58.247010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:13:39.856 [2024-12-06 06:39:58.247020] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:39.856 [2024-12-06 06:39:58.247034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.856 [2024-12-06 06:39:58.294329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.856 BaseBdev1 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.856 [ 00:13:39.856 { 00:13:39.856 "name": "BaseBdev1", 00:13:39.856 "aliases": [ 00:13:39.856 "87526110-9216-4978-ac77-c9520629141e" 00:13:39.856 ], 00:13:39.856 "product_name": "Malloc disk", 00:13:39.856 "block_size": 512, 00:13:39.856 "num_blocks": 65536, 00:13:39.856 "uuid": "87526110-9216-4978-ac77-c9520629141e", 00:13:39.856 "assigned_rate_limits": { 00:13:39.856 "rw_ios_per_sec": 0, 00:13:39.856 "rw_mbytes_per_sec": 0, 00:13:39.856 "r_mbytes_per_sec": 0, 00:13:39.856 "w_mbytes_per_sec": 0 00:13:39.856 }, 00:13:39.856 "claimed": true, 00:13:39.856 "claim_type": "exclusive_write", 00:13:39.856 "zoned": false, 00:13:39.856 "supported_io_types": { 00:13:39.856 "read": true, 00:13:39.856 "write": true, 00:13:39.856 "unmap": true, 00:13:39.856 "flush": true, 00:13:39.856 "reset": true, 00:13:39.856 "nvme_admin": false, 00:13:39.856 "nvme_io": false, 00:13:39.856 "nvme_io_md": false, 00:13:39.856 "write_zeroes": true, 00:13:39.856 "zcopy": true, 00:13:39.856 "get_zone_info": false, 00:13:39.856 "zone_management": false, 00:13:39.856 "zone_append": false, 00:13:39.856 "compare": false, 00:13:39.856 "compare_and_write": false, 00:13:39.856 "abort": true, 00:13:39.856 "seek_hole": false, 00:13:39.856 "seek_data": false, 00:13:39.856 "copy": true, 00:13:39.856 "nvme_iov_md": false 00:13:39.856 }, 00:13:39.856 "memory_domains": [ 00:13:39.856 { 00:13:39.856 "dma_device_id": "system", 00:13:39.856 "dma_device_type": 1 00:13:39.856 }, 00:13:39.856 { 00:13:39.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:39.856 "dma_device_type": 2 00:13:39.856 } 00:13:39.856 ], 00:13:39.856 "driver_specific": {} 00:13:39.856 } 00:13:39.856 ] 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.856 06:39:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.856 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.856 "name": "Existed_Raid", 00:13:39.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.856 "strip_size_kb": 64, 00:13:39.856 "state": "configuring", 00:13:39.856 "raid_level": "concat", 00:13:39.856 "superblock": false, 00:13:39.856 "num_base_bdevs": 3, 00:13:39.856 "num_base_bdevs_discovered": 1, 00:13:39.856 "num_base_bdevs_operational": 3, 00:13:39.856 "base_bdevs_list": [ 00:13:39.856 { 00:13:39.856 "name": "BaseBdev1", 00:13:39.856 "uuid": "87526110-9216-4978-ac77-c9520629141e", 00:13:39.856 "is_configured": true, 00:13:39.856 "data_offset": 0, 00:13:39.856 "data_size": 65536 00:13:39.856 }, 00:13:39.857 { 00:13:39.857 "name": "BaseBdev2", 00:13:39.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.857 "is_configured": false, 00:13:39.857 "data_offset": 0, 00:13:39.857 "data_size": 0 00:13:39.857 }, 00:13:39.857 { 00:13:39.857 "name": "BaseBdev3", 00:13:39.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.857 "is_configured": false, 00:13:39.857 "data_offset": 0, 00:13:39.857 "data_size": 0 00:13:39.857 } 00:13:39.857 ] 00:13:39.857 }' 00:13:39.857 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.857 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.424 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:40.424 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.424 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.424 [2024-12-06 06:39:58.838564] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:40.424 [2024-12-06 06:39:58.838628] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:40.424 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.424 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:40.424 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.424 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.424 [2024-12-06 06:39:58.846629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:40.424 [2024-12-06 06:39:58.848997] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:40.425 [2024-12-06 06:39:58.849049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:40.425 [2024-12-06 06:39:58.849064] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:40.425 [2024-12-06 06:39:58.849080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.425 06:39:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.425 "name": "Existed_Raid", 00:13:40.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.425 "strip_size_kb": 64, 00:13:40.425 "state": "configuring", 00:13:40.425 "raid_level": "concat", 00:13:40.425 "superblock": false, 00:13:40.425 "num_base_bdevs": 3, 00:13:40.425 "num_base_bdevs_discovered": 1, 00:13:40.425 "num_base_bdevs_operational": 3, 00:13:40.425 "base_bdevs_list": [ 00:13:40.425 { 00:13:40.425 "name": "BaseBdev1", 00:13:40.425 "uuid": "87526110-9216-4978-ac77-c9520629141e", 00:13:40.425 "is_configured": true, 00:13:40.425 "data_offset": 
0, 00:13:40.425 "data_size": 65536 00:13:40.425 }, 00:13:40.425 { 00:13:40.425 "name": "BaseBdev2", 00:13:40.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.425 "is_configured": false, 00:13:40.425 "data_offset": 0, 00:13:40.425 "data_size": 0 00:13:40.425 }, 00:13:40.425 { 00:13:40.425 "name": "BaseBdev3", 00:13:40.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.425 "is_configured": false, 00:13:40.425 "data_offset": 0, 00:13:40.425 "data_size": 0 00:13:40.425 } 00:13:40.425 ] 00:13:40.425 }' 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.425 06:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.993 [2024-12-06 06:39:59.407337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:40.993 BaseBdev2 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.993 [ 00:13:40.993 { 00:13:40.993 "name": "BaseBdev2", 00:13:40.993 "aliases": [ 00:13:40.993 "338af7bd-b6d5-4329-b9bb-65fba82f0825" 00:13:40.993 ], 00:13:40.993 "product_name": "Malloc disk", 00:13:40.993 "block_size": 512, 00:13:40.993 "num_blocks": 65536, 00:13:40.993 "uuid": "338af7bd-b6d5-4329-b9bb-65fba82f0825", 00:13:40.993 "assigned_rate_limits": { 00:13:40.993 "rw_ios_per_sec": 0, 00:13:40.993 "rw_mbytes_per_sec": 0, 00:13:40.993 "r_mbytes_per_sec": 0, 00:13:40.993 "w_mbytes_per_sec": 0 00:13:40.993 }, 00:13:40.993 "claimed": true, 00:13:40.993 "claim_type": "exclusive_write", 00:13:40.993 "zoned": false, 00:13:40.993 "supported_io_types": { 00:13:40.993 "read": true, 00:13:40.993 "write": true, 00:13:40.993 "unmap": true, 00:13:40.993 "flush": true, 00:13:40.993 "reset": true, 00:13:40.993 "nvme_admin": false, 00:13:40.993 "nvme_io": false, 00:13:40.993 "nvme_io_md": false, 00:13:40.993 "write_zeroes": true, 00:13:40.993 "zcopy": true, 00:13:40.993 "get_zone_info": false, 00:13:40.993 "zone_management": false, 00:13:40.993 "zone_append": false, 00:13:40.993 "compare": false, 00:13:40.993 "compare_and_write": false, 00:13:40.993 "abort": true, 00:13:40.993 "seek_hole": 
false, 00:13:40.993 "seek_data": false, 00:13:40.993 "copy": true, 00:13:40.993 "nvme_iov_md": false 00:13:40.993 }, 00:13:40.993 "memory_domains": [ 00:13:40.993 { 00:13:40.993 "dma_device_id": "system", 00:13:40.993 "dma_device_type": 1 00:13:40.993 }, 00:13:40.993 { 00:13:40.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.993 "dma_device_type": 2 00:13:40.993 } 00:13:40.993 ], 00:13:40.993 "driver_specific": {} 00:13:40.993 } 00:13:40.993 ] 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.993 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.994 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:40.994 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.994 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.994 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.994 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.994 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.994 06:39:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.994 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.994 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.994 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.994 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.994 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.994 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.994 "name": "Existed_Raid", 00:13:40.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.994 "strip_size_kb": 64, 00:13:40.994 "state": "configuring", 00:13:40.994 "raid_level": "concat", 00:13:40.994 "superblock": false, 00:13:40.994 "num_base_bdevs": 3, 00:13:40.994 "num_base_bdevs_discovered": 2, 00:13:40.994 "num_base_bdevs_operational": 3, 00:13:40.994 "base_bdevs_list": [ 00:13:40.994 { 00:13:40.994 "name": "BaseBdev1", 00:13:40.994 "uuid": "87526110-9216-4978-ac77-c9520629141e", 00:13:40.994 "is_configured": true, 00:13:40.994 "data_offset": 0, 00:13:40.994 "data_size": 65536 00:13:40.994 }, 00:13:40.994 { 00:13:40.994 "name": "BaseBdev2", 00:13:40.994 "uuid": "338af7bd-b6d5-4329-b9bb-65fba82f0825", 00:13:40.994 "is_configured": true, 00:13:40.994 "data_offset": 0, 00:13:40.994 "data_size": 65536 00:13:40.994 }, 00:13:40.994 { 00:13:40.994 "name": "BaseBdev3", 00:13:40.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.994 "is_configured": false, 00:13:40.994 "data_offset": 0, 00:13:40.994 "data_size": 0 00:13:40.994 } 00:13:40.994 ] 00:13:40.994 }' 00:13:40.994 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.994 06:39:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:41.563 06:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:41.563 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.563 06:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.563 [2024-12-06 06:40:00.041033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:41.563 [2024-12-06 06:40:00.041099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:41.563 [2024-12-06 06:40:00.041120] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:41.563 [2024-12-06 06:40:00.041470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:41.563 [2024-12-06 06:40:00.041751] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:41.563 [2024-12-06 06:40:00.041769] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:41.563 [2024-12-06 06:40:00.042084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.563 BaseBdev3 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:41.563 06:40:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.563 [ 00:13:41.563 { 00:13:41.563 "name": "BaseBdev3", 00:13:41.563 "aliases": [ 00:13:41.563 "48f43c3b-636f-4445-b456-37cf0ea8280b" 00:13:41.563 ], 00:13:41.563 "product_name": "Malloc disk", 00:13:41.563 "block_size": 512, 00:13:41.563 "num_blocks": 65536, 00:13:41.563 "uuid": "48f43c3b-636f-4445-b456-37cf0ea8280b", 00:13:41.563 "assigned_rate_limits": { 00:13:41.563 "rw_ios_per_sec": 0, 00:13:41.563 "rw_mbytes_per_sec": 0, 00:13:41.563 "r_mbytes_per_sec": 0, 00:13:41.563 "w_mbytes_per_sec": 0 00:13:41.563 }, 00:13:41.563 "claimed": true, 00:13:41.563 "claim_type": "exclusive_write", 00:13:41.563 "zoned": false, 00:13:41.563 "supported_io_types": { 00:13:41.563 "read": true, 00:13:41.563 "write": true, 00:13:41.563 "unmap": true, 00:13:41.563 "flush": true, 00:13:41.563 "reset": true, 00:13:41.563 "nvme_admin": false, 00:13:41.563 "nvme_io": false, 00:13:41.563 "nvme_io_md": false, 00:13:41.563 "write_zeroes": true, 00:13:41.563 "zcopy": true, 00:13:41.563 "get_zone_info": false, 00:13:41.563 "zone_management": false, 00:13:41.563 "zone_append": false, 00:13:41.563 "compare": false, 
00:13:41.563 "compare_and_write": false, 00:13:41.563 "abort": true, 00:13:41.563 "seek_hole": false, 00:13:41.563 "seek_data": false, 00:13:41.563 "copy": true, 00:13:41.563 "nvme_iov_md": false 00:13:41.563 }, 00:13:41.563 "memory_domains": [ 00:13:41.563 { 00:13:41.563 "dma_device_id": "system", 00:13:41.563 "dma_device_type": 1 00:13:41.563 }, 00:13:41.563 { 00:13:41.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.563 "dma_device_type": 2 00:13:41.563 } 00:13:41.563 ], 00:13:41.563 "driver_specific": {} 00:13:41.563 } 00:13:41.563 ] 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.563 "name": "Existed_Raid", 00:13:41.563 "uuid": "c177bf42-466d-465e-9532-ebb2a63a21af", 00:13:41.563 "strip_size_kb": 64, 00:13:41.563 "state": "online", 00:13:41.563 "raid_level": "concat", 00:13:41.563 "superblock": false, 00:13:41.563 "num_base_bdevs": 3, 00:13:41.563 "num_base_bdevs_discovered": 3, 00:13:41.563 "num_base_bdevs_operational": 3, 00:13:41.563 "base_bdevs_list": [ 00:13:41.563 { 00:13:41.563 "name": "BaseBdev1", 00:13:41.563 "uuid": "87526110-9216-4978-ac77-c9520629141e", 00:13:41.563 "is_configured": true, 00:13:41.563 "data_offset": 0, 00:13:41.563 "data_size": 65536 00:13:41.563 }, 00:13:41.563 { 00:13:41.563 "name": "BaseBdev2", 00:13:41.563 "uuid": "338af7bd-b6d5-4329-b9bb-65fba82f0825", 00:13:41.563 "is_configured": true, 00:13:41.563 "data_offset": 0, 00:13:41.563 "data_size": 65536 00:13:41.563 }, 00:13:41.563 { 00:13:41.563 "name": "BaseBdev3", 00:13:41.563 "uuid": "48f43c3b-636f-4445-b456-37cf0ea8280b", 00:13:41.563 "is_configured": true, 00:13:41.563 "data_offset": 0, 00:13:41.563 "data_size": 65536 00:13:41.563 } 00:13:41.563 ] 00:13:41.563 }' 00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:13:41.563 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.152 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:42.152 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:42.152 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:42.152 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:42.152 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:42.152 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:42.152 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:42.152 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:42.152 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.152 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.152 [2024-12-06 06:40:00.597656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.152 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.152 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:42.152 "name": "Existed_Raid", 00:13:42.152 "aliases": [ 00:13:42.152 "c177bf42-466d-465e-9532-ebb2a63a21af" 00:13:42.152 ], 00:13:42.152 "product_name": "Raid Volume", 00:13:42.152 "block_size": 512, 00:13:42.152 "num_blocks": 196608, 00:13:42.152 "uuid": "c177bf42-466d-465e-9532-ebb2a63a21af", 00:13:42.152 "assigned_rate_limits": { 00:13:42.152 "rw_ios_per_sec": 0, 00:13:42.152 "rw_mbytes_per_sec": 0, 00:13:42.152 "r_mbytes_per_sec": 
0, 00:13:42.152 "w_mbytes_per_sec": 0 00:13:42.152 }, 00:13:42.152 "claimed": false, 00:13:42.152 "zoned": false, 00:13:42.152 "supported_io_types": { 00:13:42.152 "read": true, 00:13:42.152 "write": true, 00:13:42.152 "unmap": true, 00:13:42.152 "flush": true, 00:13:42.152 "reset": true, 00:13:42.152 "nvme_admin": false, 00:13:42.152 "nvme_io": false, 00:13:42.152 "nvme_io_md": false, 00:13:42.152 "write_zeroes": true, 00:13:42.152 "zcopy": false, 00:13:42.152 "get_zone_info": false, 00:13:42.152 "zone_management": false, 00:13:42.152 "zone_append": false, 00:13:42.152 "compare": false, 00:13:42.152 "compare_and_write": false, 00:13:42.152 "abort": false, 00:13:42.152 "seek_hole": false, 00:13:42.152 "seek_data": false, 00:13:42.152 "copy": false, 00:13:42.152 "nvme_iov_md": false 00:13:42.152 }, 00:13:42.152 "memory_domains": [ 00:13:42.152 { 00:13:42.152 "dma_device_id": "system", 00:13:42.152 "dma_device_type": 1 00:13:42.152 }, 00:13:42.152 { 00:13:42.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.152 "dma_device_type": 2 00:13:42.152 }, 00:13:42.152 { 00:13:42.152 "dma_device_id": "system", 00:13:42.152 "dma_device_type": 1 00:13:42.152 }, 00:13:42.152 { 00:13:42.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.152 "dma_device_type": 2 00:13:42.152 }, 00:13:42.152 { 00:13:42.152 "dma_device_id": "system", 00:13:42.152 "dma_device_type": 1 00:13:42.152 }, 00:13:42.152 { 00:13:42.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.152 "dma_device_type": 2 00:13:42.152 } 00:13:42.152 ], 00:13:42.152 "driver_specific": { 00:13:42.152 "raid": { 00:13:42.152 "uuid": "c177bf42-466d-465e-9532-ebb2a63a21af", 00:13:42.152 "strip_size_kb": 64, 00:13:42.152 "state": "online", 00:13:42.152 "raid_level": "concat", 00:13:42.152 "superblock": false, 00:13:42.152 "num_base_bdevs": 3, 00:13:42.152 "num_base_bdevs_discovered": 3, 00:13:42.152 "num_base_bdevs_operational": 3, 00:13:42.152 "base_bdevs_list": [ 00:13:42.152 { 00:13:42.152 "name": "BaseBdev1", 
00:13:42.152 "uuid": "87526110-9216-4978-ac77-c9520629141e", 00:13:42.152 "is_configured": true, 00:13:42.152 "data_offset": 0, 00:13:42.152 "data_size": 65536 00:13:42.152 }, 00:13:42.152 { 00:13:42.152 "name": "BaseBdev2", 00:13:42.152 "uuid": "338af7bd-b6d5-4329-b9bb-65fba82f0825", 00:13:42.152 "is_configured": true, 00:13:42.152 "data_offset": 0, 00:13:42.152 "data_size": 65536 00:13:42.152 }, 00:13:42.152 { 00:13:42.152 "name": "BaseBdev3", 00:13:42.153 "uuid": "48f43c3b-636f-4445-b456-37cf0ea8280b", 00:13:42.153 "is_configured": true, 00:13:42.153 "data_offset": 0, 00:13:42.153 "data_size": 65536 00:13:42.153 } 00:13:42.153 ] 00:13:42.153 } 00:13:42.153 } 00:13:42.153 }' 00:13:42.153 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:42.153 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:42.153 BaseBdev2 00:13:42.153 BaseBdev3' 00:13:42.153 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.153 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:42.153 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.153 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.153 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:42.153 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.153 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.153 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:42.411 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.411 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.411 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.412 [2024-12-06 06:40:00.901391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:42.412 [2024-12-06 06:40:00.901560] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.412 [2024-12-06 06:40:00.901656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.412 06:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.412 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.412 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.412 "name": "Existed_Raid", 00:13:42.412 "uuid": "c177bf42-466d-465e-9532-ebb2a63a21af", 00:13:42.412 "strip_size_kb": 64, 00:13:42.412 "state": "offline", 00:13:42.412 "raid_level": "concat", 00:13:42.412 "superblock": false, 00:13:42.412 "num_base_bdevs": 3, 00:13:42.412 "num_base_bdevs_discovered": 2, 00:13:42.412 "num_base_bdevs_operational": 2, 00:13:42.412 "base_bdevs_list": [ 00:13:42.412 { 00:13:42.412 "name": null, 00:13:42.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.412 "is_configured": false, 00:13:42.412 "data_offset": 0, 00:13:42.412 "data_size": 65536 00:13:42.412 }, 00:13:42.412 { 00:13:42.412 "name": "BaseBdev2", 00:13:42.412 "uuid": 
"338af7bd-b6d5-4329-b9bb-65fba82f0825", 00:13:42.412 "is_configured": true, 00:13:42.412 "data_offset": 0, 00:13:42.412 "data_size": 65536 00:13:42.412 }, 00:13:42.412 { 00:13:42.412 "name": "BaseBdev3", 00:13:42.412 "uuid": "48f43c3b-636f-4445-b456-37cf0ea8280b", 00:13:42.412 "is_configured": true, 00:13:42.412 "data_offset": 0, 00:13:42.412 "data_size": 65536 00:13:42.412 } 00:13:42.412 ] 00:13:42.412 }' 00:13:42.412 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.412 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.980 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:42.980 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:42.980 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.980 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:42.980 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.980 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.980 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.980 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:42.980 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:42.980 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:42.980 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.980 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.980 [2024-12-06 06:40:01.551696] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.239 [2024-12-06 06:40:01.693958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:43.239 [2024-12-06 06:40:01.694022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:43.239 06:40:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.239 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.498 BaseBdev2 00:13:43.498 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.498 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:43.498 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:43.498 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:43.498 
06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:43.498 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:43.498 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:43.498 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:43.498 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.498 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.498 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.498 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:43.498 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.498 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.498 [ 00:13:43.498 { 00:13:43.498 "name": "BaseBdev2", 00:13:43.498 "aliases": [ 00:13:43.498 "192e26d2-f436-4d75-ad30-9f84d18c98d0" 00:13:43.498 ], 00:13:43.498 "product_name": "Malloc disk", 00:13:43.498 "block_size": 512, 00:13:43.498 "num_blocks": 65536, 00:13:43.498 "uuid": "192e26d2-f436-4d75-ad30-9f84d18c98d0", 00:13:43.498 "assigned_rate_limits": { 00:13:43.498 "rw_ios_per_sec": 0, 00:13:43.498 "rw_mbytes_per_sec": 0, 00:13:43.498 "r_mbytes_per_sec": 0, 00:13:43.498 "w_mbytes_per_sec": 0 00:13:43.498 }, 00:13:43.498 "claimed": false, 00:13:43.498 "zoned": false, 00:13:43.498 "supported_io_types": { 00:13:43.498 "read": true, 00:13:43.498 "write": true, 00:13:43.498 "unmap": true, 00:13:43.498 "flush": true, 00:13:43.498 "reset": true, 00:13:43.498 "nvme_admin": false, 00:13:43.498 "nvme_io": false, 00:13:43.498 "nvme_io_md": false, 00:13:43.498 "write_zeroes": true, 
00:13:43.498 "zcopy": true, 00:13:43.498 "get_zone_info": false, 00:13:43.498 "zone_management": false, 00:13:43.498 "zone_append": false, 00:13:43.498 "compare": false, 00:13:43.498 "compare_and_write": false, 00:13:43.498 "abort": true, 00:13:43.498 "seek_hole": false, 00:13:43.498 "seek_data": false, 00:13:43.498 "copy": true, 00:13:43.498 "nvme_iov_md": false 00:13:43.498 }, 00:13:43.498 "memory_domains": [ 00:13:43.498 { 00:13:43.498 "dma_device_id": "system", 00:13:43.498 "dma_device_type": 1 00:13:43.498 }, 00:13:43.498 { 00:13:43.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.498 "dma_device_type": 2 00:13:43.498 } 00:13:43.498 ], 00:13:43.498 "driver_specific": {} 00:13:43.498 } 00:13:43.498 ] 00:13:43.498 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.498 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.499 BaseBdev3 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:43.499 06:40:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.499 [ 00:13:43.499 { 00:13:43.499 "name": "BaseBdev3", 00:13:43.499 "aliases": [ 00:13:43.499 "17cc9735-e48e-40b1-896d-0d07b04b1e32" 00:13:43.499 ], 00:13:43.499 "product_name": "Malloc disk", 00:13:43.499 "block_size": 512, 00:13:43.499 "num_blocks": 65536, 00:13:43.499 "uuid": "17cc9735-e48e-40b1-896d-0d07b04b1e32", 00:13:43.499 "assigned_rate_limits": { 00:13:43.499 "rw_ios_per_sec": 0, 00:13:43.499 "rw_mbytes_per_sec": 0, 00:13:43.499 "r_mbytes_per_sec": 0, 00:13:43.499 "w_mbytes_per_sec": 0 00:13:43.499 }, 00:13:43.499 "claimed": false, 00:13:43.499 "zoned": false, 00:13:43.499 "supported_io_types": { 00:13:43.499 "read": true, 00:13:43.499 "write": true, 00:13:43.499 "unmap": true, 00:13:43.499 "flush": true, 00:13:43.499 "reset": true, 00:13:43.499 "nvme_admin": false, 00:13:43.499 "nvme_io": false, 00:13:43.499 "nvme_io_md": false, 00:13:43.499 "write_zeroes": true, 
00:13:43.499 "zcopy": true, 00:13:43.499 "get_zone_info": false, 00:13:43.499 "zone_management": false, 00:13:43.499 "zone_append": false, 00:13:43.499 "compare": false, 00:13:43.499 "compare_and_write": false, 00:13:43.499 "abort": true, 00:13:43.499 "seek_hole": false, 00:13:43.499 "seek_data": false, 00:13:43.499 "copy": true, 00:13:43.499 "nvme_iov_md": false 00:13:43.499 }, 00:13:43.499 "memory_domains": [ 00:13:43.499 { 00:13:43.499 "dma_device_id": "system", 00:13:43.499 "dma_device_type": 1 00:13:43.499 }, 00:13:43.499 { 00:13:43.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.499 "dma_device_type": 2 00:13:43.499 } 00:13:43.499 ], 00:13:43.499 "driver_specific": {} 00:13:43.499 } 00:13:43.499 ] 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.499 [2024-12-06 06:40:01.996141] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:43.499 [2024-12-06 06:40:01.996198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:43.499 [2024-12-06 06:40:01.996234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:43.499 [2024-12-06 06:40:01.998703] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.499 06:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.499 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.499 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.499 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.499 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.499 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.499 06:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.499 06:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.499 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.499 06:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.499 06:40:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.499 "name": "Existed_Raid", 00:13:43.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.499 "strip_size_kb": 64, 00:13:43.499 "state": "configuring", 00:13:43.499 "raid_level": "concat", 00:13:43.499 "superblock": false, 00:13:43.499 "num_base_bdevs": 3, 00:13:43.499 "num_base_bdevs_discovered": 2, 00:13:43.499 "num_base_bdevs_operational": 3, 00:13:43.499 "base_bdevs_list": [ 00:13:43.499 { 00:13:43.499 "name": "BaseBdev1", 00:13:43.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.499 "is_configured": false, 00:13:43.499 "data_offset": 0, 00:13:43.499 "data_size": 0 00:13:43.499 }, 00:13:43.499 { 00:13:43.499 "name": "BaseBdev2", 00:13:43.499 "uuid": "192e26d2-f436-4d75-ad30-9f84d18c98d0", 00:13:43.499 "is_configured": true, 00:13:43.499 "data_offset": 0, 00:13:43.499 "data_size": 65536 00:13:43.499 }, 00:13:43.499 { 00:13:43.499 "name": "BaseBdev3", 00:13:43.499 "uuid": "17cc9735-e48e-40b1-896d-0d07b04b1e32", 00:13:43.499 "is_configured": true, 00:13:43.499 "data_offset": 0, 00:13:43.499 "data_size": 65536 00:13:43.499 } 00:13:43.499 ] 00:13:43.499 }' 00:13:43.499 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.499 06:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.066 [2024-12-06 06:40:02.512310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.066 "name": "Existed_Raid", 00:13:44.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.066 "strip_size_kb": 64, 00:13:44.066 "state": "configuring", 00:13:44.066 "raid_level": "concat", 00:13:44.066 "superblock": false, 
00:13:44.066 "num_base_bdevs": 3, 00:13:44.066 "num_base_bdevs_discovered": 1, 00:13:44.066 "num_base_bdevs_operational": 3, 00:13:44.066 "base_bdevs_list": [ 00:13:44.066 { 00:13:44.066 "name": "BaseBdev1", 00:13:44.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.066 "is_configured": false, 00:13:44.066 "data_offset": 0, 00:13:44.066 "data_size": 0 00:13:44.066 }, 00:13:44.066 { 00:13:44.066 "name": null, 00:13:44.066 "uuid": "192e26d2-f436-4d75-ad30-9f84d18c98d0", 00:13:44.066 "is_configured": false, 00:13:44.066 "data_offset": 0, 00:13:44.066 "data_size": 65536 00:13:44.066 }, 00:13:44.066 { 00:13:44.066 "name": "BaseBdev3", 00:13:44.066 "uuid": "17cc9735-e48e-40b1-896d-0d07b04b1e32", 00:13:44.066 "is_configured": true, 00:13:44.066 "data_offset": 0, 00:13:44.066 "data_size": 65536 00:13:44.066 } 00:13:44.066 ] 00:13:44.066 }' 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.066 06:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.634 
06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.634 [2024-12-06 06:40:03.128038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.634 BaseBdev1 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:44.634 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.635 [ 00:13:44.635 { 00:13:44.635 "name": "BaseBdev1", 00:13:44.635 "aliases": [ 00:13:44.635 "fd6d97cb-5b94-49e8-b980-bb5d29830284" 00:13:44.635 ], 00:13:44.635 "product_name": 
"Malloc disk", 00:13:44.635 "block_size": 512, 00:13:44.635 "num_blocks": 65536, 00:13:44.635 "uuid": "fd6d97cb-5b94-49e8-b980-bb5d29830284", 00:13:44.635 "assigned_rate_limits": { 00:13:44.635 "rw_ios_per_sec": 0, 00:13:44.635 "rw_mbytes_per_sec": 0, 00:13:44.635 "r_mbytes_per_sec": 0, 00:13:44.635 "w_mbytes_per_sec": 0 00:13:44.635 }, 00:13:44.635 "claimed": true, 00:13:44.635 "claim_type": "exclusive_write", 00:13:44.635 "zoned": false, 00:13:44.635 "supported_io_types": { 00:13:44.635 "read": true, 00:13:44.635 "write": true, 00:13:44.635 "unmap": true, 00:13:44.635 "flush": true, 00:13:44.635 "reset": true, 00:13:44.635 "nvme_admin": false, 00:13:44.635 "nvme_io": false, 00:13:44.635 "nvme_io_md": false, 00:13:44.635 "write_zeroes": true, 00:13:44.635 "zcopy": true, 00:13:44.635 "get_zone_info": false, 00:13:44.635 "zone_management": false, 00:13:44.635 "zone_append": false, 00:13:44.635 "compare": false, 00:13:44.635 "compare_and_write": false, 00:13:44.635 "abort": true, 00:13:44.635 "seek_hole": false, 00:13:44.635 "seek_data": false, 00:13:44.635 "copy": true, 00:13:44.635 "nvme_iov_md": false 00:13:44.635 }, 00:13:44.635 "memory_domains": [ 00:13:44.635 { 00:13:44.635 "dma_device_id": "system", 00:13:44.635 "dma_device_type": 1 00:13:44.635 }, 00:13:44.635 { 00:13:44.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.635 "dma_device_type": 2 00:13:44.635 } 00:13:44.635 ], 00:13:44.635 "driver_specific": {} 00:13:44.635 } 00:13:44.635 ] 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.635 06:40:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.635 "name": "Existed_Raid", 00:13:44.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.635 "strip_size_kb": 64, 00:13:44.635 "state": "configuring", 00:13:44.635 "raid_level": "concat", 00:13:44.635 "superblock": false, 00:13:44.635 "num_base_bdevs": 3, 00:13:44.635 "num_base_bdevs_discovered": 2, 00:13:44.635 "num_base_bdevs_operational": 3, 00:13:44.635 "base_bdevs_list": [ 00:13:44.635 { 00:13:44.635 "name": "BaseBdev1", 
00:13:44.635 "uuid": "fd6d97cb-5b94-49e8-b980-bb5d29830284", 00:13:44.635 "is_configured": true, 00:13:44.635 "data_offset": 0, 00:13:44.635 "data_size": 65536 00:13:44.635 }, 00:13:44.635 { 00:13:44.635 "name": null, 00:13:44.635 "uuid": "192e26d2-f436-4d75-ad30-9f84d18c98d0", 00:13:44.635 "is_configured": false, 00:13:44.635 "data_offset": 0, 00:13:44.635 "data_size": 65536 00:13:44.635 }, 00:13:44.635 { 00:13:44.635 "name": "BaseBdev3", 00:13:44.635 "uuid": "17cc9735-e48e-40b1-896d-0d07b04b1e32", 00:13:44.635 "is_configured": true, 00:13:44.635 "data_offset": 0, 00:13:44.635 "data_size": 65536 00:13:44.635 } 00:13:44.635 ] 00:13:44.635 }' 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.635 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.200 [2024-12-06 06:40:03.740290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:45.200 
06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.200 "name": "Existed_Raid", 00:13:45.200 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:45.200 "strip_size_kb": 64, 00:13:45.200 "state": "configuring", 00:13:45.200 "raid_level": "concat", 00:13:45.200 "superblock": false, 00:13:45.200 "num_base_bdevs": 3, 00:13:45.200 "num_base_bdevs_discovered": 1, 00:13:45.200 "num_base_bdevs_operational": 3, 00:13:45.200 "base_bdevs_list": [ 00:13:45.200 { 00:13:45.200 "name": "BaseBdev1", 00:13:45.200 "uuid": "fd6d97cb-5b94-49e8-b980-bb5d29830284", 00:13:45.200 "is_configured": true, 00:13:45.200 "data_offset": 0, 00:13:45.200 "data_size": 65536 00:13:45.200 }, 00:13:45.200 { 00:13:45.200 "name": null, 00:13:45.200 "uuid": "192e26d2-f436-4d75-ad30-9f84d18c98d0", 00:13:45.200 "is_configured": false, 00:13:45.200 "data_offset": 0, 00:13:45.200 "data_size": 65536 00:13:45.200 }, 00:13:45.200 { 00:13:45.200 "name": null, 00:13:45.200 "uuid": "17cc9735-e48e-40b1-896d-0d07b04b1e32", 00:13:45.200 "is_configured": false, 00:13:45.200 "data_offset": 0, 00:13:45.200 "data_size": 65536 00:13:45.200 } 00:13:45.200 ] 00:13:45.200 }' 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.200 06:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.769 [2024-12-06 06:40:04.324507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.769 "name": "Existed_Raid", 00:13:45.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.769 "strip_size_kb": 64, 00:13:45.769 "state": "configuring", 00:13:45.769 "raid_level": "concat", 00:13:45.769 "superblock": false, 00:13:45.769 "num_base_bdevs": 3, 00:13:45.769 "num_base_bdevs_discovered": 2, 00:13:45.769 "num_base_bdevs_operational": 3, 00:13:45.769 "base_bdevs_list": [ 00:13:45.769 { 00:13:45.769 "name": "BaseBdev1", 00:13:45.769 "uuid": "fd6d97cb-5b94-49e8-b980-bb5d29830284", 00:13:45.769 "is_configured": true, 00:13:45.769 "data_offset": 0, 00:13:45.769 "data_size": 65536 00:13:45.769 }, 00:13:45.769 { 00:13:45.769 "name": null, 00:13:45.769 "uuid": "192e26d2-f436-4d75-ad30-9f84d18c98d0", 00:13:45.769 "is_configured": false, 00:13:45.769 "data_offset": 0, 00:13:45.769 "data_size": 65536 00:13:45.769 }, 00:13:45.769 { 00:13:45.769 "name": "BaseBdev3", 00:13:45.769 "uuid": "17cc9735-e48e-40b1-896d-0d07b04b1e32", 00:13:45.769 "is_configured": true, 00:13:45.769 "data_offset": 0, 00:13:45.769 "data_size": 65536 00:13:45.769 } 00:13:45.769 ] 00:13:45.769 }' 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.769 06:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.336 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.336 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:46.336 06:40:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.336 06:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.336 06:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.336 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:46.336 06:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:46.336 06:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.336 06:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.336 [2024-12-06 06:40:04.916715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:46.608 06:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.608 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:46.608 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.608 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.608 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:46.608 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.608 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.608 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.608 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.608 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.608 
06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.608 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.608 06:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.608 06:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.608 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.608 06:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.608 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.608 "name": "Existed_Raid", 00:13:46.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.608 "strip_size_kb": 64, 00:13:46.608 "state": "configuring", 00:13:46.608 "raid_level": "concat", 00:13:46.608 "superblock": false, 00:13:46.608 "num_base_bdevs": 3, 00:13:46.608 "num_base_bdevs_discovered": 1, 00:13:46.608 "num_base_bdevs_operational": 3, 00:13:46.608 "base_bdevs_list": [ 00:13:46.608 { 00:13:46.608 "name": null, 00:13:46.609 "uuid": "fd6d97cb-5b94-49e8-b980-bb5d29830284", 00:13:46.609 "is_configured": false, 00:13:46.609 "data_offset": 0, 00:13:46.609 "data_size": 65536 00:13:46.609 }, 00:13:46.609 { 00:13:46.609 "name": null, 00:13:46.609 "uuid": "192e26d2-f436-4d75-ad30-9f84d18c98d0", 00:13:46.609 "is_configured": false, 00:13:46.609 "data_offset": 0, 00:13:46.609 "data_size": 65536 00:13:46.609 }, 00:13:46.609 { 00:13:46.609 "name": "BaseBdev3", 00:13:46.609 "uuid": "17cc9735-e48e-40b1-896d-0d07b04b1e32", 00:13:46.609 "is_configured": true, 00:13:46.609 "data_offset": 0, 00:13:46.609 "data_size": 65536 00:13:46.609 } 00:13:46.609 ] 00:13:46.609 }' 00:13:46.609 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.609 06:40:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.867 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.867 06:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.867 06:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.867 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.124 [2024-12-06 06:40:05.555231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.124 06:40:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.124 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.124 "name": "Existed_Raid", 00:13:47.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.124 "strip_size_kb": 64, 00:13:47.124 "state": "configuring", 00:13:47.124 "raid_level": "concat", 00:13:47.124 "superblock": false, 00:13:47.124 "num_base_bdevs": 3, 00:13:47.124 "num_base_bdevs_discovered": 2, 00:13:47.124 "num_base_bdevs_operational": 3, 00:13:47.124 "base_bdevs_list": [ 00:13:47.124 { 00:13:47.125 "name": null, 00:13:47.125 "uuid": "fd6d97cb-5b94-49e8-b980-bb5d29830284", 00:13:47.125 "is_configured": false, 00:13:47.125 "data_offset": 0, 00:13:47.125 "data_size": 65536 00:13:47.125 }, 00:13:47.125 { 00:13:47.125 "name": "BaseBdev2", 00:13:47.125 "uuid": "192e26d2-f436-4d75-ad30-9f84d18c98d0", 00:13:47.125 "is_configured": true, 00:13:47.125 "data_offset": 
0, 00:13:47.125 "data_size": 65536 00:13:47.125 }, 00:13:47.125 { 00:13:47.125 "name": "BaseBdev3", 00:13:47.125 "uuid": "17cc9735-e48e-40b1-896d-0d07b04b1e32", 00:13:47.125 "is_configured": true, 00:13:47.125 "data_offset": 0, 00:13:47.125 "data_size": 65536 00:13:47.125 } 00:13:47.125 ] 00:13:47.125 }' 00:13:47.125 06:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.125 06:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fd6d97cb-5b94-49e8-b980-bb5d29830284 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.690 [2024-12-06 06:40:06.241015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:47.690 [2024-12-06 06:40:06.241063] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:47.690 [2024-12-06 06:40:06.241078] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:47.690 [2024-12-06 06:40:06.241405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:47.690 [2024-12-06 06:40:06.241638] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:47.690 [2024-12-06 06:40:06.241656] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:47.690 [2024-12-06 06:40:06.241948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.690 NewBaseBdev 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:47.690 
06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.690 [ 00:13:47.690 { 00:13:47.690 "name": "NewBaseBdev", 00:13:47.690 "aliases": [ 00:13:47.690 "fd6d97cb-5b94-49e8-b980-bb5d29830284" 00:13:47.690 ], 00:13:47.690 "product_name": "Malloc disk", 00:13:47.690 "block_size": 512, 00:13:47.690 "num_blocks": 65536, 00:13:47.690 "uuid": "fd6d97cb-5b94-49e8-b980-bb5d29830284", 00:13:47.690 "assigned_rate_limits": { 00:13:47.690 "rw_ios_per_sec": 0, 00:13:47.690 "rw_mbytes_per_sec": 0, 00:13:47.690 "r_mbytes_per_sec": 0, 00:13:47.690 "w_mbytes_per_sec": 0 00:13:47.690 }, 00:13:47.690 "claimed": true, 00:13:47.690 "claim_type": "exclusive_write", 00:13:47.690 "zoned": false, 00:13:47.690 "supported_io_types": { 00:13:47.690 "read": true, 00:13:47.690 "write": true, 00:13:47.690 "unmap": true, 00:13:47.690 "flush": true, 00:13:47.690 "reset": true, 00:13:47.690 "nvme_admin": false, 00:13:47.690 "nvme_io": false, 00:13:47.690 "nvme_io_md": false, 00:13:47.690 "write_zeroes": true, 00:13:47.690 "zcopy": true, 00:13:47.690 "get_zone_info": false, 00:13:47.690 "zone_management": false, 00:13:47.690 "zone_append": false, 00:13:47.690 "compare": false, 00:13:47.690 "compare_and_write": false, 00:13:47.690 "abort": true, 00:13:47.690 "seek_hole": false, 00:13:47.690 "seek_data": false, 00:13:47.690 "copy": true, 00:13:47.690 "nvme_iov_md": false 00:13:47.690 }, 00:13:47.690 
"memory_domains": [ 00:13:47.690 { 00:13:47.690 "dma_device_id": "system", 00:13:47.690 "dma_device_type": 1 00:13:47.690 }, 00:13:47.690 { 00:13:47.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.690 "dma_device_type": 2 00:13:47.690 } 00:13:47.690 ], 00:13:47.690 "driver_specific": {} 00:13:47.690 } 00:13:47.690 ] 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.690 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.948 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.948 "name": "Existed_Raid", 00:13:47.948 "uuid": "38b4de17-aba9-475e-aae1-7f38cfaa86a9", 00:13:47.948 "strip_size_kb": 64, 00:13:47.948 "state": "online", 00:13:47.948 "raid_level": "concat", 00:13:47.948 "superblock": false, 00:13:47.948 "num_base_bdevs": 3, 00:13:47.948 "num_base_bdevs_discovered": 3, 00:13:47.948 "num_base_bdevs_operational": 3, 00:13:47.948 "base_bdevs_list": [ 00:13:47.948 { 00:13:47.948 "name": "NewBaseBdev", 00:13:47.948 "uuid": "fd6d97cb-5b94-49e8-b980-bb5d29830284", 00:13:47.948 "is_configured": true, 00:13:47.948 "data_offset": 0, 00:13:47.948 "data_size": 65536 00:13:47.948 }, 00:13:47.948 { 00:13:47.948 "name": "BaseBdev2", 00:13:47.948 "uuid": "192e26d2-f436-4d75-ad30-9f84d18c98d0", 00:13:47.948 "is_configured": true, 00:13:47.948 "data_offset": 0, 00:13:47.948 "data_size": 65536 00:13:47.948 }, 00:13:47.948 { 00:13:47.948 "name": "BaseBdev3", 00:13:47.948 "uuid": "17cc9735-e48e-40b1-896d-0d07b04b1e32", 00:13:47.948 "is_configured": true, 00:13:47.948 "data_offset": 0, 00:13:47.948 "data_size": 65536 00:13:47.948 } 00:13:47.948 ] 00:13:47.948 }' 00:13:47.948 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.948 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.207 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:48.207 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:48.207 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:13:48.207 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:48.207 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:48.207 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:48.207 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:48.207 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:48.207 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.207 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.207 [2024-12-06 06:40:06.789612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:48.207 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.207 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:48.207 "name": "Existed_Raid", 00:13:48.207 "aliases": [ 00:13:48.207 "38b4de17-aba9-475e-aae1-7f38cfaa86a9" 00:13:48.207 ], 00:13:48.207 "product_name": "Raid Volume", 00:13:48.207 "block_size": 512, 00:13:48.207 "num_blocks": 196608, 00:13:48.207 "uuid": "38b4de17-aba9-475e-aae1-7f38cfaa86a9", 00:13:48.207 "assigned_rate_limits": { 00:13:48.207 "rw_ios_per_sec": 0, 00:13:48.207 "rw_mbytes_per_sec": 0, 00:13:48.207 "r_mbytes_per_sec": 0, 00:13:48.207 "w_mbytes_per_sec": 0 00:13:48.207 }, 00:13:48.207 "claimed": false, 00:13:48.207 "zoned": false, 00:13:48.207 "supported_io_types": { 00:13:48.207 "read": true, 00:13:48.207 "write": true, 00:13:48.207 "unmap": true, 00:13:48.207 "flush": true, 00:13:48.207 "reset": true, 00:13:48.207 "nvme_admin": false, 00:13:48.207 "nvme_io": false, 00:13:48.207 "nvme_io_md": false, 00:13:48.207 "write_zeroes": true, 
00:13:48.207 "zcopy": false, 00:13:48.207 "get_zone_info": false, 00:13:48.207 "zone_management": false, 00:13:48.207 "zone_append": false, 00:13:48.207 "compare": false, 00:13:48.207 "compare_and_write": false, 00:13:48.207 "abort": false, 00:13:48.207 "seek_hole": false, 00:13:48.207 "seek_data": false, 00:13:48.207 "copy": false, 00:13:48.207 "nvme_iov_md": false 00:13:48.207 }, 00:13:48.207 "memory_domains": [ 00:13:48.207 { 00:13:48.207 "dma_device_id": "system", 00:13:48.207 "dma_device_type": 1 00:13:48.207 }, 00:13:48.207 { 00:13:48.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.207 "dma_device_type": 2 00:13:48.207 }, 00:13:48.207 { 00:13:48.207 "dma_device_id": "system", 00:13:48.207 "dma_device_type": 1 00:13:48.207 }, 00:13:48.207 { 00:13:48.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.207 "dma_device_type": 2 00:13:48.207 }, 00:13:48.207 { 00:13:48.207 "dma_device_id": "system", 00:13:48.207 "dma_device_type": 1 00:13:48.207 }, 00:13:48.207 { 00:13:48.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.207 "dma_device_type": 2 00:13:48.207 } 00:13:48.207 ], 00:13:48.207 "driver_specific": { 00:13:48.207 "raid": { 00:13:48.207 "uuid": "38b4de17-aba9-475e-aae1-7f38cfaa86a9", 00:13:48.207 "strip_size_kb": 64, 00:13:48.207 "state": "online", 00:13:48.207 "raid_level": "concat", 00:13:48.207 "superblock": false, 00:13:48.207 "num_base_bdevs": 3, 00:13:48.207 "num_base_bdevs_discovered": 3, 00:13:48.207 "num_base_bdevs_operational": 3, 00:13:48.207 "base_bdevs_list": [ 00:13:48.207 { 00:13:48.207 "name": "NewBaseBdev", 00:13:48.207 "uuid": "fd6d97cb-5b94-49e8-b980-bb5d29830284", 00:13:48.207 "is_configured": true, 00:13:48.207 "data_offset": 0, 00:13:48.207 "data_size": 65536 00:13:48.207 }, 00:13:48.207 { 00:13:48.208 "name": "BaseBdev2", 00:13:48.208 "uuid": "192e26d2-f436-4d75-ad30-9f84d18c98d0", 00:13:48.208 "is_configured": true, 00:13:48.208 "data_offset": 0, 00:13:48.208 "data_size": 65536 00:13:48.208 }, 00:13:48.208 { 
00:13:48.208 "name": "BaseBdev3", 00:13:48.208 "uuid": "17cc9735-e48e-40b1-896d-0d07b04b1e32", 00:13:48.208 "is_configured": true, 00:13:48.208 "data_offset": 0, 00:13:48.208 "data_size": 65536 00:13:48.208 } 00:13:48.208 ] 00:13:48.208 } 00:13:48.208 } 00:13:48.208 }' 00:13:48.208 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:48.467 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:48.467 BaseBdev2 00:13:48.467 BaseBdev3' 00:13:48.467 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.467 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:48.467 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.467 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:48.467 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.467 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.467 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.467 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.467 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.467 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.467 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.467 06:40:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:48.467 06:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.467 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.467 06:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:48.467 [2024-12-06 06:40:07.105304] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:48.467 [2024-12-06 06:40:07.105341] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:48.467 [2024-12-06 06:40:07.105447] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.467 [2024-12-06 06:40:07.105538] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:48.467 [2024-12-06 06:40:07.105561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65824 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65824 ']' 00:13:48.467 06:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65824 00:13:48.726 06:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:48.726 06:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.726 06:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65824 00:13:48.726 06:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.726 killing process with pid 65824 00:13:48.726 06:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.726 06:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65824' 00:13:48.726 06:40:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65824 00:13:48.726 [2024-12-06 06:40:07.142374] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:48.726 06:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65824 00:13:48.984 [2024-12-06 06:40:07.416514] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:49.921 00:13:49.921 real 0m11.872s 00:13:49.921 user 0m19.663s 00:13:49.921 sys 0m1.638s 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.921 ************************************ 00:13:49.921 END TEST raid_state_function_test 00:13:49.921 ************************************ 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.921 06:40:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:13:49.921 06:40:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:49.921 06:40:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.921 06:40:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:49.921 ************************************ 00:13:49.921 START TEST raid_state_function_test_sb 00:13:49.921 ************************************ 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66460 00:13:49.921 Process raid pid: 66460 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66460' 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66460 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66460 ']' 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.921 06:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.180 [2024-12-06 06:40:08.640586] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:13:50.180 [2024-12-06 06:40:08.640723] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.180 [2024-12-06 06:40:08.815838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.438 [2024-12-06 06:40:08.950469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.701 [2024-12-06 06:40:09.162814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.701 [2024-12-06 06:40:09.162871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.265 06:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.265 06:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:51.265 06:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:51.265 06:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.265 06:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.265 [2024-12-06 06:40:09.676488] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:51.265 [2024-12-06 06:40:09.676565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:51.265 [2024-12-06 
06:40:09.676582] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:51.265 [2024-12-06 06:40:09.676599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:51.265 [2024-12-06 06:40:09.676610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:51.265 [2024-12-06 06:40:09.676625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.266 "name": "Existed_Raid", 00:13:51.266 "uuid": "4454cdff-49a9-4ceb-9db7-7c77c0bba1f4", 00:13:51.266 "strip_size_kb": 64, 00:13:51.266 "state": "configuring", 00:13:51.266 "raid_level": "concat", 00:13:51.266 "superblock": true, 00:13:51.266 "num_base_bdevs": 3, 00:13:51.266 "num_base_bdevs_discovered": 0, 00:13:51.266 "num_base_bdevs_operational": 3, 00:13:51.266 "base_bdevs_list": [ 00:13:51.266 { 00:13:51.266 "name": "BaseBdev1", 00:13:51.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.266 "is_configured": false, 00:13:51.266 "data_offset": 0, 00:13:51.266 "data_size": 0 00:13:51.266 }, 00:13:51.266 { 00:13:51.266 "name": "BaseBdev2", 00:13:51.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.266 "is_configured": false, 00:13:51.266 "data_offset": 0, 00:13:51.266 "data_size": 0 00:13:51.266 }, 00:13:51.266 { 00:13:51.266 "name": "BaseBdev3", 00:13:51.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.266 "is_configured": false, 00:13:51.266 "data_offset": 0, 00:13:51.266 "data_size": 0 00:13:51.266 } 00:13:51.266 ] 00:13:51.266 }' 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.266 06:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.830 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:51.830 06:40:10 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.830 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.831 [2024-12-06 06:40:10.188563] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:51.831 [2024-12-06 06:40:10.188609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.831 [2024-12-06 06:40:10.196564] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:51.831 [2024-12-06 06:40:10.196617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:51.831 [2024-12-06 06:40:10.196631] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:51.831 [2024-12-06 06:40:10.196647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:51.831 [2024-12-06 06:40:10.196657] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:51.831 [2024-12-06 06:40:10.196671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:51.831 
06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.831 [2024-12-06 06:40:10.241437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.831 BaseBdev1 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.831 [ 00:13:51.831 { 
00:13:51.831 "name": "BaseBdev1", 00:13:51.831 "aliases": [ 00:13:51.831 "2b85d563-e32d-4a48-a23e-79fc4cd3c831" 00:13:51.831 ], 00:13:51.831 "product_name": "Malloc disk", 00:13:51.831 "block_size": 512, 00:13:51.831 "num_blocks": 65536, 00:13:51.831 "uuid": "2b85d563-e32d-4a48-a23e-79fc4cd3c831", 00:13:51.831 "assigned_rate_limits": { 00:13:51.831 "rw_ios_per_sec": 0, 00:13:51.831 "rw_mbytes_per_sec": 0, 00:13:51.831 "r_mbytes_per_sec": 0, 00:13:51.831 "w_mbytes_per_sec": 0 00:13:51.831 }, 00:13:51.831 "claimed": true, 00:13:51.831 "claim_type": "exclusive_write", 00:13:51.831 "zoned": false, 00:13:51.831 "supported_io_types": { 00:13:51.831 "read": true, 00:13:51.831 "write": true, 00:13:51.831 "unmap": true, 00:13:51.831 "flush": true, 00:13:51.831 "reset": true, 00:13:51.831 "nvme_admin": false, 00:13:51.831 "nvme_io": false, 00:13:51.831 "nvme_io_md": false, 00:13:51.831 "write_zeroes": true, 00:13:51.831 "zcopy": true, 00:13:51.831 "get_zone_info": false, 00:13:51.831 "zone_management": false, 00:13:51.831 "zone_append": false, 00:13:51.831 "compare": false, 00:13:51.831 "compare_and_write": false, 00:13:51.831 "abort": true, 00:13:51.831 "seek_hole": false, 00:13:51.831 "seek_data": false, 00:13:51.831 "copy": true, 00:13:51.831 "nvme_iov_md": false 00:13:51.831 }, 00:13:51.831 "memory_domains": [ 00:13:51.831 { 00:13:51.831 "dma_device_id": "system", 00:13:51.831 "dma_device_type": 1 00:13:51.831 }, 00:13:51.831 { 00:13:51.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.831 "dma_device_type": 2 00:13:51.831 } 00:13:51.831 ], 00:13:51.831 "driver_specific": {} 00:13:51.831 } 00:13:51.831 ] 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.831 "name": "Existed_Raid", 00:13:51.831 "uuid": "de872756-11a9-4f30-b7de-8bc75c658a29", 00:13:51.831 "strip_size_kb": 64, 00:13:51.831 "state": "configuring", 00:13:51.831 "raid_level": "concat", 00:13:51.831 "superblock": true, 00:13:51.831 
"num_base_bdevs": 3, 00:13:51.831 "num_base_bdevs_discovered": 1, 00:13:51.831 "num_base_bdevs_operational": 3, 00:13:51.831 "base_bdevs_list": [ 00:13:51.831 { 00:13:51.831 "name": "BaseBdev1", 00:13:51.831 "uuid": "2b85d563-e32d-4a48-a23e-79fc4cd3c831", 00:13:51.831 "is_configured": true, 00:13:51.831 "data_offset": 2048, 00:13:51.831 "data_size": 63488 00:13:51.831 }, 00:13:51.831 { 00:13:51.831 "name": "BaseBdev2", 00:13:51.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.831 "is_configured": false, 00:13:51.831 "data_offset": 0, 00:13:51.831 "data_size": 0 00:13:51.831 }, 00:13:51.831 { 00:13:51.831 "name": "BaseBdev3", 00:13:51.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.831 "is_configured": false, 00:13:51.831 "data_offset": 0, 00:13:51.831 "data_size": 0 00:13:51.831 } 00:13:51.831 ] 00:13:51.831 }' 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.831 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.396 [2024-12-06 06:40:10.793654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:52.396 [2024-12-06 06:40:10.793721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:52.396 
06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.396 [2024-12-06 06:40:10.801695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.396 [2024-12-06 06:40:10.804075] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.396 [2024-12-06 06:40:10.804126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:52.396 [2024-12-06 06:40:10.804141] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:52.396 [2024-12-06 06:40:10.804156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.396 "name": "Existed_Raid", 00:13:52.396 "uuid": "0ffc661a-22f8-407e-a169-dc5567e60db0", 00:13:52.396 "strip_size_kb": 64, 00:13:52.396 "state": "configuring", 00:13:52.396 "raid_level": "concat", 00:13:52.396 "superblock": true, 00:13:52.396 "num_base_bdevs": 3, 00:13:52.396 "num_base_bdevs_discovered": 1, 00:13:52.396 "num_base_bdevs_operational": 3, 00:13:52.396 "base_bdevs_list": [ 00:13:52.396 { 00:13:52.396 "name": "BaseBdev1", 00:13:52.396 "uuid": "2b85d563-e32d-4a48-a23e-79fc4cd3c831", 00:13:52.396 "is_configured": true, 00:13:52.396 "data_offset": 2048, 00:13:52.396 "data_size": 63488 00:13:52.396 }, 00:13:52.396 { 00:13:52.396 "name": "BaseBdev2", 00:13:52.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.396 "is_configured": false, 00:13:52.396 "data_offset": 0, 00:13:52.396 "data_size": 0 00:13:52.396 }, 00:13:52.396 { 00:13:52.396 "name": "BaseBdev3", 00:13:52.396 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:52.396 "is_configured": false, 00:13:52.396 "data_offset": 0, 00:13:52.396 "data_size": 0 00:13:52.396 } 00:13:52.396 ] 00:13:52.396 }' 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.396 06:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.961 [2024-12-06 06:40:11.372375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.961 BaseBdev2 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.961 [ 00:13:52.961 { 00:13:52.961 "name": "BaseBdev2", 00:13:52.961 "aliases": [ 00:13:52.961 "6a443215-12e3-4684-a8bd-12c0dee352c6" 00:13:52.961 ], 00:13:52.961 "product_name": "Malloc disk", 00:13:52.961 "block_size": 512, 00:13:52.961 "num_blocks": 65536, 00:13:52.961 "uuid": "6a443215-12e3-4684-a8bd-12c0dee352c6", 00:13:52.961 "assigned_rate_limits": { 00:13:52.961 "rw_ios_per_sec": 0, 00:13:52.961 "rw_mbytes_per_sec": 0, 00:13:52.961 "r_mbytes_per_sec": 0, 00:13:52.961 "w_mbytes_per_sec": 0 00:13:52.961 }, 00:13:52.961 "claimed": true, 00:13:52.961 "claim_type": "exclusive_write", 00:13:52.961 "zoned": false, 00:13:52.961 "supported_io_types": { 00:13:52.961 "read": true, 00:13:52.961 "write": true, 00:13:52.961 "unmap": true, 00:13:52.961 "flush": true, 00:13:52.961 "reset": true, 00:13:52.961 "nvme_admin": false, 00:13:52.961 "nvme_io": false, 00:13:52.961 "nvme_io_md": false, 00:13:52.961 "write_zeroes": true, 00:13:52.961 "zcopy": true, 00:13:52.961 "get_zone_info": false, 00:13:52.961 "zone_management": false, 00:13:52.961 "zone_append": false, 00:13:52.961 "compare": false, 00:13:52.961 "compare_and_write": false, 00:13:52.961 "abort": true, 00:13:52.961 "seek_hole": false, 00:13:52.961 "seek_data": false, 00:13:52.961 "copy": true, 00:13:52.961 "nvme_iov_md": false 00:13:52.961 }, 00:13:52.961 "memory_domains": [ 00:13:52.961 { 00:13:52.961 "dma_device_id": "system", 00:13:52.961 "dma_device_type": 1 00:13:52.961 }, 00:13:52.961 { 00:13:52.961 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.961 "dma_device_type": 2 00:13:52.961 } 00:13:52.961 ], 00:13:52.961 "driver_specific": {} 00:13:52.961 } 00:13:52.961 ] 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.961 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.961 "name": "Existed_Raid", 00:13:52.961 "uuid": "0ffc661a-22f8-407e-a169-dc5567e60db0", 00:13:52.961 "strip_size_kb": 64, 00:13:52.961 "state": "configuring", 00:13:52.961 "raid_level": "concat", 00:13:52.961 "superblock": true, 00:13:52.961 "num_base_bdevs": 3, 00:13:52.961 "num_base_bdevs_discovered": 2, 00:13:52.961 "num_base_bdevs_operational": 3, 00:13:52.961 "base_bdevs_list": [ 00:13:52.961 { 00:13:52.961 "name": "BaseBdev1", 00:13:52.961 "uuid": "2b85d563-e32d-4a48-a23e-79fc4cd3c831", 00:13:52.961 "is_configured": true, 00:13:52.961 "data_offset": 2048, 00:13:52.961 "data_size": 63488 00:13:52.961 }, 00:13:52.961 { 00:13:52.961 "name": "BaseBdev2", 00:13:52.962 "uuid": "6a443215-12e3-4684-a8bd-12c0dee352c6", 00:13:52.962 "is_configured": true, 00:13:52.962 "data_offset": 2048, 00:13:52.962 "data_size": 63488 00:13:52.962 }, 00:13:52.962 { 00:13:52.962 "name": "BaseBdev3", 00:13:52.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.962 "is_configured": false, 00:13:52.962 "data_offset": 0, 00:13:52.962 "data_size": 0 00:13:52.962 } 00:13:52.962 ] 00:13:52.962 }' 00:13:52.962 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.962 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.526 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:53.526 06:40:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.526 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.526 [2024-12-06 06:40:11.956360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:53.526 [2024-12-06 06:40:11.956686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:53.526 [2024-12-06 06:40:11.956721] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:53.526 [2024-12-06 06:40:11.957051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:53.526 BaseBdev3 00:13:53.526 [2024-12-06 06:40:11.957292] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:53.526 [2024-12-06 06:40:11.957318] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:53.526 [2024-12-06 06:40:11.957509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.526 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.526 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:53.526 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:53.526 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:53.526 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:53.526 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:53.526 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:53.526 06:40:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:53.526 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.526 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.526 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.526 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.527 [ 00:13:53.527 { 00:13:53.527 "name": "BaseBdev3", 00:13:53.527 "aliases": [ 00:13:53.527 "21b374e2-da6c-471e-9eeb-09755b25400e" 00:13:53.527 ], 00:13:53.527 "product_name": "Malloc disk", 00:13:53.527 "block_size": 512, 00:13:53.527 "num_blocks": 65536, 00:13:53.527 "uuid": "21b374e2-da6c-471e-9eeb-09755b25400e", 00:13:53.527 "assigned_rate_limits": { 00:13:53.527 "rw_ios_per_sec": 0, 00:13:53.527 "rw_mbytes_per_sec": 0, 00:13:53.527 "r_mbytes_per_sec": 0, 00:13:53.527 "w_mbytes_per_sec": 0 00:13:53.527 }, 00:13:53.527 "claimed": true, 00:13:53.527 "claim_type": "exclusive_write", 00:13:53.527 "zoned": false, 00:13:53.527 "supported_io_types": { 00:13:53.527 "read": true, 00:13:53.527 "write": true, 00:13:53.527 "unmap": true, 00:13:53.527 "flush": true, 00:13:53.527 "reset": true, 00:13:53.527 "nvme_admin": false, 00:13:53.527 "nvme_io": false, 00:13:53.527 "nvme_io_md": false, 00:13:53.527 "write_zeroes": true, 00:13:53.527 "zcopy": true, 00:13:53.527 "get_zone_info": false, 00:13:53.527 "zone_management": false, 00:13:53.527 "zone_append": false, 00:13:53.527 "compare": false, 00:13:53.527 "compare_and_write": false, 00:13:53.527 "abort": true, 00:13:53.527 "seek_hole": false, 00:13:53.527 "seek_data": false, 
00:13:53.527 "copy": true, 00:13:53.527 "nvme_iov_md": false 00:13:53.527 }, 00:13:53.527 "memory_domains": [ 00:13:53.527 { 00:13:53.527 "dma_device_id": "system", 00:13:53.527 "dma_device_type": 1 00:13:53.527 }, 00:13:53.527 { 00:13:53.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.527 "dma_device_type": 2 00:13:53.527 } 00:13:53.527 ], 00:13:53.527 "driver_specific": {} 00:13:53.527 } 00:13:53.527 ] 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.527 06:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.527 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.527 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.527 "name": "Existed_Raid", 00:13:53.527 "uuid": "0ffc661a-22f8-407e-a169-dc5567e60db0", 00:13:53.527 "strip_size_kb": 64, 00:13:53.527 "state": "online", 00:13:53.527 "raid_level": "concat", 00:13:53.527 "superblock": true, 00:13:53.527 "num_base_bdevs": 3, 00:13:53.527 "num_base_bdevs_discovered": 3, 00:13:53.527 "num_base_bdevs_operational": 3, 00:13:53.527 "base_bdevs_list": [ 00:13:53.527 { 00:13:53.527 "name": "BaseBdev1", 00:13:53.527 "uuid": "2b85d563-e32d-4a48-a23e-79fc4cd3c831", 00:13:53.527 "is_configured": true, 00:13:53.527 "data_offset": 2048, 00:13:53.527 "data_size": 63488 00:13:53.527 }, 00:13:53.527 { 00:13:53.527 "name": "BaseBdev2", 00:13:53.527 "uuid": "6a443215-12e3-4684-a8bd-12c0dee352c6", 00:13:53.527 "is_configured": true, 00:13:53.527 "data_offset": 2048, 00:13:53.527 "data_size": 63488 00:13:53.527 }, 00:13:53.527 { 00:13:53.527 "name": "BaseBdev3", 00:13:53.527 "uuid": "21b374e2-da6c-471e-9eeb-09755b25400e", 00:13:53.527 "is_configured": true, 00:13:53.527 "data_offset": 2048, 00:13:53.527 "data_size": 63488 00:13:53.527 } 00:13:53.527 ] 00:13:53.527 }' 00:13:53.527 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.527 06:40:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.092 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:54.092 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:54.092 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:54.092 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:54.092 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:54.092 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:54.092 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:54.092 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:54.092 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.092 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.092 [2024-12-06 06:40:12.512943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:54.092 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.092 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:54.092 "name": "Existed_Raid", 00:13:54.092 "aliases": [ 00:13:54.092 "0ffc661a-22f8-407e-a169-dc5567e60db0" 00:13:54.092 ], 00:13:54.092 "product_name": "Raid Volume", 00:13:54.092 "block_size": 512, 00:13:54.092 "num_blocks": 190464, 00:13:54.092 "uuid": "0ffc661a-22f8-407e-a169-dc5567e60db0", 00:13:54.092 "assigned_rate_limits": { 00:13:54.092 "rw_ios_per_sec": 0, 00:13:54.092 "rw_mbytes_per_sec": 0, 00:13:54.092 
"r_mbytes_per_sec": 0, 00:13:54.092 "w_mbytes_per_sec": 0 00:13:54.092 }, 00:13:54.092 "claimed": false, 00:13:54.092 "zoned": false, 00:13:54.092 "supported_io_types": { 00:13:54.093 "read": true, 00:13:54.093 "write": true, 00:13:54.093 "unmap": true, 00:13:54.093 "flush": true, 00:13:54.093 "reset": true, 00:13:54.093 "nvme_admin": false, 00:13:54.093 "nvme_io": false, 00:13:54.093 "nvme_io_md": false, 00:13:54.093 "write_zeroes": true, 00:13:54.093 "zcopy": false, 00:13:54.093 "get_zone_info": false, 00:13:54.093 "zone_management": false, 00:13:54.093 "zone_append": false, 00:13:54.093 "compare": false, 00:13:54.093 "compare_and_write": false, 00:13:54.093 "abort": false, 00:13:54.093 "seek_hole": false, 00:13:54.093 "seek_data": false, 00:13:54.093 "copy": false, 00:13:54.093 "nvme_iov_md": false 00:13:54.093 }, 00:13:54.093 "memory_domains": [ 00:13:54.093 { 00:13:54.093 "dma_device_id": "system", 00:13:54.093 "dma_device_type": 1 00:13:54.093 }, 00:13:54.093 { 00:13:54.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.093 "dma_device_type": 2 00:13:54.093 }, 00:13:54.093 { 00:13:54.093 "dma_device_id": "system", 00:13:54.093 "dma_device_type": 1 00:13:54.093 }, 00:13:54.093 { 00:13:54.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.093 "dma_device_type": 2 00:13:54.093 }, 00:13:54.093 { 00:13:54.093 "dma_device_id": "system", 00:13:54.093 "dma_device_type": 1 00:13:54.093 }, 00:13:54.093 { 00:13:54.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.093 "dma_device_type": 2 00:13:54.093 } 00:13:54.093 ], 00:13:54.093 "driver_specific": { 00:13:54.093 "raid": { 00:13:54.093 "uuid": "0ffc661a-22f8-407e-a169-dc5567e60db0", 00:13:54.093 "strip_size_kb": 64, 00:13:54.093 "state": "online", 00:13:54.093 "raid_level": "concat", 00:13:54.093 "superblock": true, 00:13:54.093 "num_base_bdevs": 3, 00:13:54.093 "num_base_bdevs_discovered": 3, 00:13:54.093 "num_base_bdevs_operational": 3, 00:13:54.093 "base_bdevs_list": [ 00:13:54.093 { 00:13:54.093 
"name": "BaseBdev1", 00:13:54.093 "uuid": "2b85d563-e32d-4a48-a23e-79fc4cd3c831", 00:13:54.093 "is_configured": true, 00:13:54.093 "data_offset": 2048, 00:13:54.093 "data_size": 63488 00:13:54.093 }, 00:13:54.093 { 00:13:54.093 "name": "BaseBdev2", 00:13:54.093 "uuid": "6a443215-12e3-4684-a8bd-12c0dee352c6", 00:13:54.093 "is_configured": true, 00:13:54.093 "data_offset": 2048, 00:13:54.093 "data_size": 63488 00:13:54.093 }, 00:13:54.093 { 00:13:54.093 "name": "BaseBdev3", 00:13:54.093 "uuid": "21b374e2-da6c-471e-9eeb-09755b25400e", 00:13:54.093 "is_configured": true, 00:13:54.093 "data_offset": 2048, 00:13:54.093 "data_size": 63488 00:13:54.093 } 00:13:54.093 ] 00:13:54.093 } 00:13:54.093 } 00:13:54.093 }' 00:13:54.093 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:54.093 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:54.093 BaseBdev2 00:13:54.093 BaseBdev3' 00:13:54.093 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.093 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:54.093 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.093 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.093 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:54.093 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.093 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.093 06:40:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.093 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.093 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.093 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.093 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:54.093 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.093 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.093 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.093 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.351 [2024-12-06 06:40:12.820767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:54.351 [2024-12-06 06:40:12.820811] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:54.351 [2024-12-06 06:40:12.820908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.351 "name": "Existed_Raid", 00:13:54.351 "uuid": "0ffc661a-22f8-407e-a169-dc5567e60db0", 00:13:54.351 "strip_size_kb": 64, 00:13:54.351 "state": "offline", 00:13:54.351 "raid_level": "concat", 00:13:54.351 "superblock": true, 00:13:54.351 "num_base_bdevs": 3, 00:13:54.351 "num_base_bdevs_discovered": 2, 00:13:54.351 "num_base_bdevs_operational": 2, 00:13:54.351 "base_bdevs_list": [ 00:13:54.351 { 00:13:54.351 "name": null, 00:13:54.351 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:54.351 "is_configured": false, 00:13:54.351 "data_offset": 0, 00:13:54.351 "data_size": 63488 00:13:54.351 }, 00:13:54.351 { 00:13:54.351 "name": "BaseBdev2", 00:13:54.351 "uuid": "6a443215-12e3-4684-a8bd-12c0dee352c6", 00:13:54.351 "is_configured": true, 00:13:54.351 "data_offset": 2048, 00:13:54.351 "data_size": 63488 00:13:54.351 }, 00:13:54.351 { 00:13:54.351 "name": "BaseBdev3", 00:13:54.351 "uuid": "21b374e2-da6c-471e-9eeb-09755b25400e", 00:13:54.351 "is_configured": true, 00:13:54.351 "data_offset": 2048, 00:13:54.351 "data_size": 63488 00:13:54.351 } 00:13:54.351 ] 00:13:54.351 }' 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.351 06:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.939 [2024-12-06 06:40:13.468747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.939 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.197 [2024-12-06 06:40:13.611255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:55.197 [2024-12-06 06:40:13.611324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:55.197 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.198 BaseBdev2 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.198 
06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.198 [ 00:13:55.198 { 00:13:55.198 "name": "BaseBdev2", 00:13:55.198 "aliases": [ 00:13:55.198 "ae4af2e9-438e-4dd5-a397-2e65ece3e597" 00:13:55.198 ], 00:13:55.198 "product_name": "Malloc disk", 00:13:55.198 "block_size": 512, 00:13:55.198 "num_blocks": 65536, 00:13:55.198 "uuid": "ae4af2e9-438e-4dd5-a397-2e65ece3e597", 00:13:55.198 "assigned_rate_limits": { 00:13:55.198 "rw_ios_per_sec": 0, 00:13:55.198 "rw_mbytes_per_sec": 0, 00:13:55.198 "r_mbytes_per_sec": 0, 00:13:55.198 "w_mbytes_per_sec": 0 
00:13:55.198 }, 00:13:55.198 "claimed": false, 00:13:55.198 "zoned": false, 00:13:55.198 "supported_io_types": { 00:13:55.198 "read": true, 00:13:55.198 "write": true, 00:13:55.198 "unmap": true, 00:13:55.198 "flush": true, 00:13:55.198 "reset": true, 00:13:55.198 "nvme_admin": false, 00:13:55.198 "nvme_io": false, 00:13:55.198 "nvme_io_md": false, 00:13:55.198 "write_zeroes": true, 00:13:55.198 "zcopy": true, 00:13:55.198 "get_zone_info": false, 00:13:55.198 "zone_management": false, 00:13:55.198 "zone_append": false, 00:13:55.198 "compare": false, 00:13:55.198 "compare_and_write": false, 00:13:55.198 "abort": true, 00:13:55.198 "seek_hole": false, 00:13:55.198 "seek_data": false, 00:13:55.198 "copy": true, 00:13:55.198 "nvme_iov_md": false 00:13:55.198 }, 00:13:55.198 "memory_domains": [ 00:13:55.198 { 00:13:55.198 "dma_device_id": "system", 00:13:55.198 "dma_device_type": 1 00:13:55.198 }, 00:13:55.198 { 00:13:55.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.198 "dma_device_type": 2 00:13:55.198 } 00:13:55.198 ], 00:13:55.198 "driver_specific": {} 00:13:55.198 } 00:13:55.198 ] 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.198 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.538 BaseBdev3 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.538 [ 00:13:55.538 { 00:13:55.538 "name": "BaseBdev3", 00:13:55.538 "aliases": [ 00:13:55.538 "ff2db218-280c-4c6d-bf54-7519cfa151e2" 00:13:55.538 ], 00:13:55.538 "product_name": "Malloc disk", 00:13:55.538 "block_size": 512, 00:13:55.538 "num_blocks": 65536, 00:13:55.538 "uuid": "ff2db218-280c-4c6d-bf54-7519cfa151e2", 00:13:55.538 "assigned_rate_limits": { 00:13:55.538 "rw_ios_per_sec": 0, 00:13:55.538 "rw_mbytes_per_sec": 0, 
00:13:55.538 "r_mbytes_per_sec": 0, 00:13:55.538 "w_mbytes_per_sec": 0 00:13:55.538 }, 00:13:55.538 "claimed": false, 00:13:55.538 "zoned": false, 00:13:55.538 "supported_io_types": { 00:13:55.538 "read": true, 00:13:55.538 "write": true, 00:13:55.538 "unmap": true, 00:13:55.538 "flush": true, 00:13:55.538 "reset": true, 00:13:55.538 "nvme_admin": false, 00:13:55.538 "nvme_io": false, 00:13:55.538 "nvme_io_md": false, 00:13:55.538 "write_zeroes": true, 00:13:55.538 "zcopy": true, 00:13:55.538 "get_zone_info": false, 00:13:55.538 "zone_management": false, 00:13:55.538 "zone_append": false, 00:13:55.538 "compare": false, 00:13:55.538 "compare_and_write": false, 00:13:55.538 "abort": true, 00:13:55.538 "seek_hole": false, 00:13:55.538 "seek_data": false, 00:13:55.538 "copy": true, 00:13:55.538 "nvme_iov_md": false 00:13:55.538 }, 00:13:55.538 "memory_domains": [ 00:13:55.538 { 00:13:55.538 "dma_device_id": "system", 00:13:55.538 "dma_device_type": 1 00:13:55.538 }, 00:13:55.538 { 00:13:55.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.538 "dma_device_type": 2 00:13:55.538 } 00:13:55.538 ], 00:13:55.538 "driver_specific": {} 00:13:55.538 } 00:13:55.538 ] 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.538 [2024-12-06 06:40:13.896051] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:55.538 [2024-12-06 06:40:13.896103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:55.538 [2024-12-06 06:40:13.896134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.538 [2024-12-06 06:40:13.898566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.538 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.539 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.539 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.539 06:40:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.539 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.539 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.539 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.539 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.539 "name": "Existed_Raid", 00:13:55.539 "uuid": "7886775b-f39f-4d46-95b5-7a3efdd258dc", 00:13:55.539 "strip_size_kb": 64, 00:13:55.539 "state": "configuring", 00:13:55.539 "raid_level": "concat", 00:13:55.539 "superblock": true, 00:13:55.539 "num_base_bdevs": 3, 00:13:55.539 "num_base_bdevs_discovered": 2, 00:13:55.539 "num_base_bdevs_operational": 3, 00:13:55.539 "base_bdevs_list": [ 00:13:55.539 { 00:13:55.539 "name": "BaseBdev1", 00:13:55.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.539 "is_configured": false, 00:13:55.539 "data_offset": 0, 00:13:55.539 "data_size": 0 00:13:55.539 }, 00:13:55.539 { 00:13:55.539 "name": "BaseBdev2", 00:13:55.539 "uuid": "ae4af2e9-438e-4dd5-a397-2e65ece3e597", 00:13:55.539 "is_configured": true, 00:13:55.539 "data_offset": 2048, 00:13:55.539 "data_size": 63488 00:13:55.539 }, 00:13:55.539 { 00:13:55.539 "name": "BaseBdev3", 00:13:55.539 "uuid": "ff2db218-280c-4c6d-bf54-7519cfa151e2", 00:13:55.539 "is_configured": true, 00:13:55.539 "data_offset": 2048, 00:13:55.539 "data_size": 63488 00:13:55.539 } 00:13:55.539 ] 00:13:55.539 }' 00:13:55.539 06:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.539 06:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.804 [2024-12-06 06:40:14.400221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.804 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.062 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.062 "name": "Existed_Raid", 00:13:56.062 "uuid": "7886775b-f39f-4d46-95b5-7a3efdd258dc", 00:13:56.062 "strip_size_kb": 64, 00:13:56.062 "state": "configuring", 00:13:56.062 "raid_level": "concat", 00:13:56.062 "superblock": true, 00:13:56.062 "num_base_bdevs": 3, 00:13:56.062 "num_base_bdevs_discovered": 1, 00:13:56.062 "num_base_bdevs_operational": 3, 00:13:56.062 "base_bdevs_list": [ 00:13:56.062 { 00:13:56.062 "name": "BaseBdev1", 00:13:56.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.062 "is_configured": false, 00:13:56.062 "data_offset": 0, 00:13:56.062 "data_size": 0 00:13:56.062 }, 00:13:56.062 { 00:13:56.062 "name": null, 00:13:56.062 "uuid": "ae4af2e9-438e-4dd5-a397-2e65ece3e597", 00:13:56.062 "is_configured": false, 00:13:56.062 "data_offset": 0, 00:13:56.062 "data_size": 63488 00:13:56.062 }, 00:13:56.062 { 00:13:56.062 "name": "BaseBdev3", 00:13:56.062 "uuid": "ff2db218-280c-4c6d-bf54-7519cfa151e2", 00:13:56.063 "is_configured": true, 00:13:56.063 "data_offset": 2048, 00:13:56.063 "data_size": 63488 00:13:56.063 } 00:13:56.063 ] 00:13:56.063 }' 00:13:56.063 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.063 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.321 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.321 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.321 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.321 06:40:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:56.321 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.321 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:56.321 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:56.321 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.321 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.578 [2024-12-06 06:40:14.970262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.578 BaseBdev1 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.578 
06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.578 [ 00:13:56.578 { 00:13:56.578 "name": "BaseBdev1", 00:13:56.578 "aliases": [ 00:13:56.578 "61ecdf17-6f46-4420-a406-1f431479f377" 00:13:56.578 ], 00:13:56.578 "product_name": "Malloc disk", 00:13:56.578 "block_size": 512, 00:13:56.578 "num_blocks": 65536, 00:13:56.578 "uuid": "61ecdf17-6f46-4420-a406-1f431479f377", 00:13:56.578 "assigned_rate_limits": { 00:13:56.578 "rw_ios_per_sec": 0, 00:13:56.578 "rw_mbytes_per_sec": 0, 00:13:56.578 "r_mbytes_per_sec": 0, 00:13:56.578 "w_mbytes_per_sec": 0 00:13:56.578 }, 00:13:56.578 "claimed": true, 00:13:56.578 "claim_type": "exclusive_write", 00:13:56.578 "zoned": false, 00:13:56.578 "supported_io_types": { 00:13:56.578 "read": true, 00:13:56.578 "write": true, 00:13:56.578 "unmap": true, 00:13:56.578 "flush": true, 00:13:56.578 "reset": true, 00:13:56.578 "nvme_admin": false, 00:13:56.578 "nvme_io": false, 00:13:56.578 "nvme_io_md": false, 00:13:56.578 "write_zeroes": true, 00:13:56.578 "zcopy": true, 00:13:56.578 "get_zone_info": false, 00:13:56.578 "zone_management": false, 00:13:56.578 "zone_append": false, 00:13:56.578 "compare": false, 00:13:56.578 "compare_and_write": false, 00:13:56.578 "abort": true, 00:13:56.578 "seek_hole": false, 00:13:56.578 "seek_data": false, 00:13:56.578 "copy": true, 00:13:56.578 "nvme_iov_md": false 00:13:56.578 }, 00:13:56.578 "memory_domains": [ 00:13:56.578 { 00:13:56.578 "dma_device_id": "system", 00:13:56.578 "dma_device_type": 1 00:13:56.578 }, 00:13:56.578 { 00:13:56.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:56.578 "dma_device_type": 2 00:13:56.578 } 00:13:56.578 ], 00:13:56.578 "driver_specific": {} 00:13:56.578 } 00:13:56.578 ] 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.578 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.579 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:56.579 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.579 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.579 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.579 06:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.579 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.579 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.579 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.579 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.579 06:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.579 06:40:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:56.579 06:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.579 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.579 "name": "Existed_Raid", 00:13:56.579 "uuid": "7886775b-f39f-4d46-95b5-7a3efdd258dc", 00:13:56.579 "strip_size_kb": 64, 00:13:56.579 "state": "configuring", 00:13:56.579 "raid_level": "concat", 00:13:56.579 "superblock": true, 00:13:56.579 "num_base_bdevs": 3, 00:13:56.579 "num_base_bdevs_discovered": 2, 00:13:56.579 "num_base_bdevs_operational": 3, 00:13:56.579 "base_bdevs_list": [ 00:13:56.579 { 00:13:56.579 "name": "BaseBdev1", 00:13:56.579 "uuid": "61ecdf17-6f46-4420-a406-1f431479f377", 00:13:56.579 "is_configured": true, 00:13:56.579 "data_offset": 2048, 00:13:56.579 "data_size": 63488 00:13:56.579 }, 00:13:56.579 { 00:13:56.579 "name": null, 00:13:56.579 "uuid": "ae4af2e9-438e-4dd5-a397-2e65ece3e597", 00:13:56.579 "is_configured": false, 00:13:56.579 "data_offset": 0, 00:13:56.579 "data_size": 63488 00:13:56.579 }, 00:13:56.579 { 00:13:56.579 "name": "BaseBdev3", 00:13:56.579 "uuid": "ff2db218-280c-4c6d-bf54-7519cfa151e2", 00:13:56.579 "is_configured": true, 00:13:56.579 "data_offset": 2048, 00:13:56.579 "data_size": 63488 00:13:56.579 } 00:13:56.579 ] 00:13:56.579 }' 00:13:56.579 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.579 06:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.143 [2024-12-06 06:40:15.534476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.143 "name": "Existed_Raid", 00:13:57.143 "uuid": "7886775b-f39f-4d46-95b5-7a3efdd258dc", 00:13:57.143 "strip_size_kb": 64, 00:13:57.143 "state": "configuring", 00:13:57.143 "raid_level": "concat", 00:13:57.143 "superblock": true, 00:13:57.143 "num_base_bdevs": 3, 00:13:57.143 "num_base_bdevs_discovered": 1, 00:13:57.143 "num_base_bdevs_operational": 3, 00:13:57.143 "base_bdevs_list": [ 00:13:57.143 { 00:13:57.143 "name": "BaseBdev1", 00:13:57.143 "uuid": "61ecdf17-6f46-4420-a406-1f431479f377", 00:13:57.143 "is_configured": true, 00:13:57.143 "data_offset": 2048, 00:13:57.143 "data_size": 63488 00:13:57.143 }, 00:13:57.143 { 00:13:57.143 "name": null, 00:13:57.143 "uuid": "ae4af2e9-438e-4dd5-a397-2e65ece3e597", 00:13:57.143 "is_configured": false, 00:13:57.143 "data_offset": 0, 00:13:57.143 "data_size": 63488 00:13:57.143 }, 00:13:57.143 { 00:13:57.143 "name": null, 00:13:57.143 "uuid": "ff2db218-280c-4c6d-bf54-7519cfa151e2", 00:13:57.143 "is_configured": false, 00:13:57.143 "data_offset": 0, 00:13:57.143 "data_size": 63488 00:13:57.143 } 00:13:57.143 ] 00:13:57.143 }' 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.143 06:40:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:57.401 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.401 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.401 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.401 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:57.402 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.660 [2024-12-06 06:40:16.078678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.660 06:40:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.660 "name": "Existed_Raid", 00:13:57.660 "uuid": "7886775b-f39f-4d46-95b5-7a3efdd258dc", 00:13:57.660 "strip_size_kb": 64, 00:13:57.660 "state": "configuring", 00:13:57.660 "raid_level": "concat", 00:13:57.660 "superblock": true, 00:13:57.660 "num_base_bdevs": 3, 00:13:57.660 "num_base_bdevs_discovered": 2, 00:13:57.660 "num_base_bdevs_operational": 3, 00:13:57.660 "base_bdevs_list": [ 00:13:57.660 { 00:13:57.660 "name": "BaseBdev1", 00:13:57.660 "uuid": "61ecdf17-6f46-4420-a406-1f431479f377", 00:13:57.660 "is_configured": true, 00:13:57.660 "data_offset": 2048, 00:13:57.660 "data_size": 63488 00:13:57.660 }, 00:13:57.660 { 00:13:57.660 "name": null, 00:13:57.660 "uuid": "ae4af2e9-438e-4dd5-a397-2e65ece3e597", 00:13:57.660 "is_configured": 
false, 00:13:57.660 "data_offset": 0, 00:13:57.660 "data_size": 63488 00:13:57.660 }, 00:13:57.660 { 00:13:57.660 "name": "BaseBdev3", 00:13:57.660 "uuid": "ff2db218-280c-4c6d-bf54-7519cfa151e2", 00:13:57.660 "is_configured": true, 00:13:57.660 "data_offset": 2048, 00:13:57.660 "data_size": 63488 00:13:57.660 } 00:13:57.660 ] 00:13:57.660 }' 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.660 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.227 [2024-12-06 06:40:16.690877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:58.227 06:40:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.227 "name": "Existed_Raid", 00:13:58.227 "uuid": "7886775b-f39f-4d46-95b5-7a3efdd258dc", 00:13:58.227 "strip_size_kb": 64, 00:13:58.227 "state": "configuring", 00:13:58.227 "raid_level": "concat", 00:13:58.227 "superblock": true, 00:13:58.227 "num_base_bdevs": 3, 00:13:58.227 
"num_base_bdevs_discovered": 1, 00:13:58.227 "num_base_bdevs_operational": 3, 00:13:58.227 "base_bdevs_list": [ 00:13:58.227 { 00:13:58.227 "name": null, 00:13:58.227 "uuid": "61ecdf17-6f46-4420-a406-1f431479f377", 00:13:58.227 "is_configured": false, 00:13:58.227 "data_offset": 0, 00:13:58.227 "data_size": 63488 00:13:58.227 }, 00:13:58.227 { 00:13:58.227 "name": null, 00:13:58.227 "uuid": "ae4af2e9-438e-4dd5-a397-2e65ece3e597", 00:13:58.227 "is_configured": false, 00:13:58.227 "data_offset": 0, 00:13:58.227 "data_size": 63488 00:13:58.227 }, 00:13:58.227 { 00:13:58.227 "name": "BaseBdev3", 00:13:58.227 "uuid": "ff2db218-280c-4c6d-bf54-7519cfa151e2", 00:13:58.227 "is_configured": true, 00:13:58.227 "data_offset": 2048, 00:13:58.227 "data_size": 63488 00:13:58.227 } 00:13:58.227 ] 00:13:58.227 }' 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.227 06:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.795 06:40:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.795 [2024-12-06 06:40:17.361648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.795 
06:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.795 "name": "Existed_Raid", 00:13:58.795 "uuid": "7886775b-f39f-4d46-95b5-7a3efdd258dc", 00:13:58.795 "strip_size_kb": 64, 00:13:58.795 "state": "configuring", 00:13:58.795 "raid_level": "concat", 00:13:58.795 "superblock": true, 00:13:58.795 "num_base_bdevs": 3, 00:13:58.795 "num_base_bdevs_discovered": 2, 00:13:58.795 "num_base_bdevs_operational": 3, 00:13:58.795 "base_bdevs_list": [ 00:13:58.795 { 00:13:58.795 "name": null, 00:13:58.795 "uuid": "61ecdf17-6f46-4420-a406-1f431479f377", 00:13:58.795 "is_configured": false, 00:13:58.795 "data_offset": 0, 00:13:58.795 "data_size": 63488 00:13:58.795 }, 00:13:58.795 { 00:13:58.795 "name": "BaseBdev2", 00:13:58.795 "uuid": "ae4af2e9-438e-4dd5-a397-2e65ece3e597", 00:13:58.795 "is_configured": true, 00:13:58.795 "data_offset": 2048, 00:13:58.795 "data_size": 63488 00:13:58.795 }, 00:13:58.795 { 00:13:58.795 "name": "BaseBdev3", 00:13:58.795 "uuid": "ff2db218-280c-4c6d-bf54-7519cfa151e2", 00:13:58.795 "is_configured": true, 00:13:58.795 "data_offset": 2048, 00:13:58.795 "data_size": 63488 00:13:58.795 } 00:13:58.795 ] 00:13:58.795 }' 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.795 06:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.362 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.362 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:59.362 06:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.362 06:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:59.362 06:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.362 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:59.362 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:59.362 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.362 06:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.362 06:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.362 06:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.362 06:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 61ecdf17-6f46-4420-a406-1f431479f377 00:13:59.362 06:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.362 06:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.620 [2024-12-06 06:40:18.021490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:59.620 [2024-12-06 06:40:18.021844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:59.620 [2024-12-06 06:40:18.021869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:59.620 [2024-12-06 06:40:18.022222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:59.620 [2024-12-06 06:40:18.022416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:59.621 [2024-12-06 06:40:18.022439] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:13:59.621 [2024-12-06 06:40:18.022633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.621 NewBaseBdev 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.621 [ 00:13:59.621 { 00:13:59.621 "name": "NewBaseBdev", 00:13:59.621 "aliases": [ 00:13:59.621 "61ecdf17-6f46-4420-a406-1f431479f377" 00:13:59.621 ], 00:13:59.621 "product_name": "Malloc disk", 00:13:59.621 "block_size": 512, 
00:13:59.621 "num_blocks": 65536, 00:13:59.621 "uuid": "61ecdf17-6f46-4420-a406-1f431479f377", 00:13:59.621 "assigned_rate_limits": { 00:13:59.621 "rw_ios_per_sec": 0, 00:13:59.621 "rw_mbytes_per_sec": 0, 00:13:59.621 "r_mbytes_per_sec": 0, 00:13:59.621 "w_mbytes_per_sec": 0 00:13:59.621 }, 00:13:59.621 "claimed": true, 00:13:59.621 "claim_type": "exclusive_write", 00:13:59.621 "zoned": false, 00:13:59.621 "supported_io_types": { 00:13:59.621 "read": true, 00:13:59.621 "write": true, 00:13:59.621 "unmap": true, 00:13:59.621 "flush": true, 00:13:59.621 "reset": true, 00:13:59.621 "nvme_admin": false, 00:13:59.621 "nvme_io": false, 00:13:59.621 "nvme_io_md": false, 00:13:59.621 "write_zeroes": true, 00:13:59.621 "zcopy": true, 00:13:59.621 "get_zone_info": false, 00:13:59.621 "zone_management": false, 00:13:59.621 "zone_append": false, 00:13:59.621 "compare": false, 00:13:59.621 "compare_and_write": false, 00:13:59.621 "abort": true, 00:13:59.621 "seek_hole": false, 00:13:59.621 "seek_data": false, 00:13:59.621 "copy": true, 00:13:59.621 "nvme_iov_md": false 00:13:59.621 }, 00:13:59.621 "memory_domains": [ 00:13:59.621 { 00:13:59.621 "dma_device_id": "system", 00:13:59.621 "dma_device_type": 1 00:13:59.621 }, 00:13:59.621 { 00:13:59.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.621 "dma_device_type": 2 00:13:59.621 } 00:13:59.621 ], 00:13:59.621 "driver_specific": {} 00:13:59.621 } 00:13:59.621 ] 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.621 "name": "Existed_Raid", 00:13:59.621 "uuid": "7886775b-f39f-4d46-95b5-7a3efdd258dc", 00:13:59.621 "strip_size_kb": 64, 00:13:59.621 "state": "online", 00:13:59.621 "raid_level": "concat", 00:13:59.621 "superblock": true, 00:13:59.621 "num_base_bdevs": 3, 00:13:59.621 "num_base_bdevs_discovered": 3, 00:13:59.621 "num_base_bdevs_operational": 3, 00:13:59.621 "base_bdevs_list": [ 00:13:59.621 { 00:13:59.621 "name": "NewBaseBdev", 00:13:59.621 "uuid": 
"61ecdf17-6f46-4420-a406-1f431479f377", 00:13:59.621 "is_configured": true, 00:13:59.621 "data_offset": 2048, 00:13:59.621 "data_size": 63488 00:13:59.621 }, 00:13:59.621 { 00:13:59.621 "name": "BaseBdev2", 00:13:59.621 "uuid": "ae4af2e9-438e-4dd5-a397-2e65ece3e597", 00:13:59.621 "is_configured": true, 00:13:59.621 "data_offset": 2048, 00:13:59.621 "data_size": 63488 00:13:59.621 }, 00:13:59.621 { 00:13:59.621 "name": "BaseBdev3", 00:13:59.621 "uuid": "ff2db218-280c-4c6d-bf54-7519cfa151e2", 00:13:59.621 "is_configured": true, 00:13:59.621 "data_offset": 2048, 00:13:59.621 "data_size": 63488 00:13:59.621 } 00:13:59.621 ] 00:13:59.621 }' 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.621 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.201 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:00.201 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:00.201 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:00.201 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:00.201 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:00.201 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:00.201 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:00.201 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:00.201 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.201 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:00.201 [2024-12-06 06:40:18.558096] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.201 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.201 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:00.201 "name": "Existed_Raid", 00:14:00.201 "aliases": [ 00:14:00.201 "7886775b-f39f-4d46-95b5-7a3efdd258dc" 00:14:00.201 ], 00:14:00.201 "product_name": "Raid Volume", 00:14:00.201 "block_size": 512, 00:14:00.201 "num_blocks": 190464, 00:14:00.201 "uuid": "7886775b-f39f-4d46-95b5-7a3efdd258dc", 00:14:00.201 "assigned_rate_limits": { 00:14:00.201 "rw_ios_per_sec": 0, 00:14:00.201 "rw_mbytes_per_sec": 0, 00:14:00.201 "r_mbytes_per_sec": 0, 00:14:00.201 "w_mbytes_per_sec": 0 00:14:00.201 }, 00:14:00.201 "claimed": false, 00:14:00.201 "zoned": false, 00:14:00.201 "supported_io_types": { 00:14:00.201 "read": true, 00:14:00.201 "write": true, 00:14:00.201 "unmap": true, 00:14:00.201 "flush": true, 00:14:00.201 "reset": true, 00:14:00.201 "nvme_admin": false, 00:14:00.201 "nvme_io": false, 00:14:00.201 "nvme_io_md": false, 00:14:00.201 "write_zeroes": true, 00:14:00.201 "zcopy": false, 00:14:00.201 "get_zone_info": false, 00:14:00.201 "zone_management": false, 00:14:00.201 "zone_append": false, 00:14:00.201 "compare": false, 00:14:00.201 "compare_and_write": false, 00:14:00.201 "abort": false, 00:14:00.201 "seek_hole": false, 00:14:00.201 "seek_data": false, 00:14:00.201 "copy": false, 00:14:00.201 "nvme_iov_md": false 00:14:00.201 }, 00:14:00.201 "memory_domains": [ 00:14:00.201 { 00:14:00.201 "dma_device_id": "system", 00:14:00.201 "dma_device_type": 1 00:14:00.201 }, 00:14:00.201 { 00:14:00.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.201 "dma_device_type": 2 00:14:00.201 }, 00:14:00.201 { 00:14:00.201 "dma_device_id": "system", 00:14:00.201 "dma_device_type": 1 00:14:00.201 }, 00:14:00.201 { 00:14:00.201 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.201 "dma_device_type": 2 00:14:00.201 }, 00:14:00.201 { 00:14:00.201 "dma_device_id": "system", 00:14:00.201 "dma_device_type": 1 00:14:00.201 }, 00:14:00.201 { 00:14:00.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.201 "dma_device_type": 2 00:14:00.201 } 00:14:00.201 ], 00:14:00.201 "driver_specific": { 00:14:00.201 "raid": { 00:14:00.201 "uuid": "7886775b-f39f-4d46-95b5-7a3efdd258dc", 00:14:00.201 "strip_size_kb": 64, 00:14:00.201 "state": "online", 00:14:00.201 "raid_level": "concat", 00:14:00.201 "superblock": true, 00:14:00.201 "num_base_bdevs": 3, 00:14:00.201 "num_base_bdevs_discovered": 3, 00:14:00.201 "num_base_bdevs_operational": 3, 00:14:00.201 "base_bdevs_list": [ 00:14:00.201 { 00:14:00.201 "name": "NewBaseBdev", 00:14:00.201 "uuid": "61ecdf17-6f46-4420-a406-1f431479f377", 00:14:00.201 "is_configured": true, 00:14:00.201 "data_offset": 2048, 00:14:00.201 "data_size": 63488 00:14:00.201 }, 00:14:00.201 { 00:14:00.201 "name": "BaseBdev2", 00:14:00.201 "uuid": "ae4af2e9-438e-4dd5-a397-2e65ece3e597", 00:14:00.201 "is_configured": true, 00:14:00.201 "data_offset": 2048, 00:14:00.201 "data_size": 63488 00:14:00.201 }, 00:14:00.201 { 00:14:00.201 "name": "BaseBdev3", 00:14:00.201 "uuid": "ff2db218-280c-4c6d-bf54-7519cfa151e2", 00:14:00.201 "is_configured": true, 00:14:00.201 "data_offset": 2048, 00:14:00.201 "data_size": 63488 00:14:00.201 } 00:14:00.201 ] 00:14:00.201 } 00:14:00.201 } 00:14:00.201 }' 00:14:00.201 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:00.201 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:00.201 BaseBdev2 00:14:00.201 BaseBdev3' 00:14:00.201 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.202 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.459 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.459 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.459 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:00.459 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.459 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.459 [2024-12-06 06:40:18.877783] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:00.459 [2024-12-06 06:40:18.877816] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.459 [2024-12-06 06:40:18.877916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.459 [2024-12-06 06:40:18.877987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.459 [2024-12-06 06:40:18.878007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:14:00.459 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.459 06:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66460 00:14:00.459 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66460 ']' 00:14:00.459 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66460 00:14:00.459 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:00.459 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.459 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66460 00:14:00.459 killing process with pid 66460 00:14:00.459 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:00.459 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:00.459 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66460' 00:14:00.459 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66460 00:14:00.459 [2024-12-06 06:40:18.915954] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:00.459 06:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66460 00:14:00.717 [2024-12-06 06:40:19.189729] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:01.651 ************************************ 00:14:01.651 END TEST raid_state_function_test_sb 00:14:01.651 ************************************ 00:14:01.651 06:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:01.651 00:14:01.651 real 0m11.706s 
00:14:01.651 user 0m19.450s 00:14:01.651 sys 0m1.575s 00:14:01.651 06:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.651 06:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.651 06:40:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:14:01.651 06:40:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:01.651 06:40:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.651 06:40:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:01.958 ************************************ 00:14:01.958 START TEST raid_superblock_test 00:14:01.958 ************************************ 00:14:01.958 06:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:14:01.958 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:14:01.958 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:01.958 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:01.958 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:01.958 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:01.958 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:01.959 06:40:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67092 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67092 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67092 ']' 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.959 06:40:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.959 [2024-12-06 06:40:20.404617] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:14:01.959 [2024-12-06 06:40:20.405097] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67092 ] 00:14:01.959 [2024-12-06 06:40:20.592466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.232 [2024-12-06 06:40:20.736674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.490 [2024-12-06 06:40:20.944238] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.490 [2024-12-06 06:40:20.944302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.747 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.747 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:02.747 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:02.747 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:02.747 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:02.747 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:02.747 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:02.747 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:02.747 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:02.747 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:02.747 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:02.747 
06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.747 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.005 malloc1 00:14:03.005 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.005 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:03.005 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.005 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.005 [2024-12-06 06:40:21.437381] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:03.005 [2024-12-06 06:40:21.437625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.005 [2024-12-06 06:40:21.437819] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:03.005 [2024-12-06 06:40:21.437985] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.005 [2024-12-06 06:40:21.440982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.005 [2024-12-06 06:40:21.441161] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:03.005 pt1 00:14:03.005 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.005 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:03.005 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:03.005 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:03.005 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:03.005 06:40:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:03.005 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:03.005 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.006 malloc2 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.006 [2024-12-06 06:40:21.489861] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:03.006 [2024-12-06 06:40:21.490076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.006 [2024-12-06 06:40:21.490149] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:03.006 [2024-12-06 06:40:21.490184] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.006 [2024-12-06 06:40:21.493053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.006 [2024-12-06 06:40:21.493226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:03.006 
pt2 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.006 malloc3 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.006 [2024-12-06 06:40:21.550155] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:03.006 [2024-12-06 06:40:21.550374] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.006 [2024-12-06 06:40:21.550444] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:03.006 [2024-12-06 06:40:21.550476] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.006 [2024-12-06 06:40:21.553493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.006 [2024-12-06 06:40:21.553692] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:03.006 pt3 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.006 [2024-12-06 06:40:21.562204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:03.006 [2024-12-06 06:40:21.564755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:03.006 [2024-12-06 06:40:21.565027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:03.006 [2024-12-06 06:40:21.565317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:03.006 [2024-12-06 06:40:21.565343] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:03.006 [2024-12-06 06:40:21.565740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:14:03.006 [2024-12-06 06:40:21.565967] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:03.006 [2024-12-06 06:40:21.565984] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:03.006 [2024-12-06 06:40:21.566330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.006 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.006 "name": "raid_bdev1", 00:14:03.006 "uuid": "61be7ad6-6f0a-43ca-af44-4ff54c36debd", 00:14:03.006 "strip_size_kb": 64, 00:14:03.006 "state": "online", 00:14:03.006 "raid_level": "concat", 00:14:03.006 "superblock": true, 00:14:03.006 "num_base_bdevs": 3, 00:14:03.006 "num_base_bdevs_discovered": 3, 00:14:03.006 "num_base_bdevs_operational": 3, 00:14:03.006 "base_bdevs_list": [ 00:14:03.006 { 00:14:03.006 "name": "pt1", 00:14:03.006 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:03.006 "is_configured": true, 00:14:03.006 "data_offset": 2048, 00:14:03.006 "data_size": 63488 00:14:03.006 }, 00:14:03.006 { 00:14:03.006 "name": "pt2", 00:14:03.006 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.006 "is_configured": true, 00:14:03.006 "data_offset": 2048, 00:14:03.006 "data_size": 63488 00:14:03.006 }, 00:14:03.006 { 00:14:03.007 "name": "pt3", 00:14:03.007 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:03.007 "is_configured": true, 00:14:03.007 "data_offset": 2048, 00:14:03.007 "data_size": 63488 00:14:03.007 } 00:14:03.007 ] 00:14:03.007 }' 00:14:03.007 06:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.007 06:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.573 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:03.573 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:03.573 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:03.573 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:14:03.573 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:03.573 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:03.573 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:03.573 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:03.573 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.573 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.573 [2024-12-06 06:40:22.050769] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.573 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.573 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:03.573 "name": "raid_bdev1", 00:14:03.573 "aliases": [ 00:14:03.573 "61be7ad6-6f0a-43ca-af44-4ff54c36debd" 00:14:03.573 ], 00:14:03.573 "product_name": "Raid Volume", 00:14:03.573 "block_size": 512, 00:14:03.573 "num_blocks": 190464, 00:14:03.573 "uuid": "61be7ad6-6f0a-43ca-af44-4ff54c36debd", 00:14:03.573 "assigned_rate_limits": { 00:14:03.573 "rw_ios_per_sec": 0, 00:14:03.573 "rw_mbytes_per_sec": 0, 00:14:03.573 "r_mbytes_per_sec": 0, 00:14:03.573 "w_mbytes_per_sec": 0 00:14:03.573 }, 00:14:03.573 "claimed": false, 00:14:03.573 "zoned": false, 00:14:03.573 "supported_io_types": { 00:14:03.573 "read": true, 00:14:03.573 "write": true, 00:14:03.573 "unmap": true, 00:14:03.573 "flush": true, 00:14:03.573 "reset": true, 00:14:03.573 "nvme_admin": false, 00:14:03.573 "nvme_io": false, 00:14:03.573 "nvme_io_md": false, 00:14:03.573 "write_zeroes": true, 00:14:03.573 "zcopy": false, 00:14:03.573 "get_zone_info": false, 00:14:03.573 "zone_management": false, 00:14:03.573 "zone_append": false, 00:14:03.573 "compare": 
false, 00:14:03.573 "compare_and_write": false, 00:14:03.573 "abort": false, 00:14:03.573 "seek_hole": false, 00:14:03.573 "seek_data": false, 00:14:03.573 "copy": false, 00:14:03.573 "nvme_iov_md": false 00:14:03.573 }, 00:14:03.573 "memory_domains": [ 00:14:03.573 { 00:14:03.573 "dma_device_id": "system", 00:14:03.573 "dma_device_type": 1 00:14:03.573 }, 00:14:03.573 { 00:14:03.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.573 "dma_device_type": 2 00:14:03.573 }, 00:14:03.573 { 00:14:03.573 "dma_device_id": "system", 00:14:03.573 "dma_device_type": 1 00:14:03.573 }, 00:14:03.573 { 00:14:03.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.573 "dma_device_type": 2 00:14:03.573 }, 00:14:03.573 { 00:14:03.573 "dma_device_id": "system", 00:14:03.573 "dma_device_type": 1 00:14:03.573 }, 00:14:03.573 { 00:14:03.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.573 "dma_device_type": 2 00:14:03.573 } 00:14:03.573 ], 00:14:03.573 "driver_specific": { 00:14:03.573 "raid": { 00:14:03.573 "uuid": "61be7ad6-6f0a-43ca-af44-4ff54c36debd", 00:14:03.573 "strip_size_kb": 64, 00:14:03.573 "state": "online", 00:14:03.573 "raid_level": "concat", 00:14:03.573 "superblock": true, 00:14:03.573 "num_base_bdevs": 3, 00:14:03.573 "num_base_bdevs_discovered": 3, 00:14:03.573 "num_base_bdevs_operational": 3, 00:14:03.573 "base_bdevs_list": [ 00:14:03.573 { 00:14:03.573 "name": "pt1", 00:14:03.573 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:03.573 "is_configured": true, 00:14:03.573 "data_offset": 2048, 00:14:03.573 "data_size": 63488 00:14:03.573 }, 00:14:03.573 { 00:14:03.573 "name": "pt2", 00:14:03.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.574 "is_configured": true, 00:14:03.574 "data_offset": 2048, 00:14:03.574 "data_size": 63488 00:14:03.574 }, 00:14:03.574 { 00:14:03.574 "name": "pt3", 00:14:03.574 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:03.574 "is_configured": true, 00:14:03.574 "data_offset": 2048, 00:14:03.574 
"data_size": 63488 00:14:03.574 } 00:14:03.574 ] 00:14:03.574 } 00:14:03.574 } 00:14:03.574 }' 00:14:03.574 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:03.574 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:03.574 pt2 00:14:03.574 pt3' 00:14:03.574 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.574 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:03.574 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.574 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:03.574 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.574 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.574 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.574 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.832 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:03.833 [2024-12-06 06:40:22.334799] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=61be7ad6-6f0a-43ca-af44-4ff54c36debd 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 61be7ad6-6f0a-43ca-af44-4ff54c36debd ']' 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.833 [2024-12-06 06:40:22.386410] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:03.833 [2024-12-06 06:40:22.386612] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.833 [2024-12-06 06:40:22.386765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.833 [2024-12-06 06:40:22.386892] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.833 [2024-12-06 06:40:22.386927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.833 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.091 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.091 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:04.091 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:04.091 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:04.091 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:04.091 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:04.091 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:04.091 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:04.091 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:04.091 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:04.091 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.091 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.091 [2024-12-06 06:40:22.526529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:04.091 [2024-12-06 06:40:22.529411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:04.091 
[2024-12-06 06:40:22.529487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:04.091 [2024-12-06 06:40:22.529589] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:04.091 [2024-12-06 06:40:22.529668] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:04.091 [2024-12-06 06:40:22.529702] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:04.091 [2024-12-06 06:40:22.529729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:04.091 [2024-12-06 06:40:22.529742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:04.091 request: 00:14:04.091 { 00:14:04.091 "name": "raid_bdev1", 00:14:04.091 "raid_level": "concat", 00:14:04.091 "base_bdevs": [ 00:14:04.091 "malloc1", 00:14:04.091 "malloc2", 00:14:04.091 "malloc3" 00:14:04.091 ], 00:14:04.091 "strip_size_kb": 64, 00:14:04.091 "superblock": false, 00:14:04.091 "method": "bdev_raid_create", 00:14:04.091 "req_id": 1 00:14:04.091 } 00:14:04.091 Got JSON-RPC error response 00:14:04.091 response: 00:14:04.091 { 00:14:04.091 "code": -17, 00:14:04.092 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:04.092 } 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:04.092 06:40:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.092 [2024-12-06 06:40:22.590583] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:04.092 [2024-12-06 06:40:22.590783] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.092 [2024-12-06 06:40:22.590889] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:04.092 [2024-12-06 06:40:22.591061] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.092 [2024-12-06 06:40:22.594170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.092 [2024-12-06 06:40:22.594343] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:04.092 [2024-12-06 06:40:22.594670] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:04.092 [2024-12-06 06:40:22.594894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:14:04.092 pt1 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.092 "name": "raid_bdev1", 00:14:04.092 "uuid": 
"61be7ad6-6f0a-43ca-af44-4ff54c36debd", 00:14:04.092 "strip_size_kb": 64, 00:14:04.092 "state": "configuring", 00:14:04.092 "raid_level": "concat", 00:14:04.092 "superblock": true, 00:14:04.092 "num_base_bdevs": 3, 00:14:04.092 "num_base_bdevs_discovered": 1, 00:14:04.092 "num_base_bdevs_operational": 3, 00:14:04.092 "base_bdevs_list": [ 00:14:04.092 { 00:14:04.092 "name": "pt1", 00:14:04.092 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:04.092 "is_configured": true, 00:14:04.092 "data_offset": 2048, 00:14:04.092 "data_size": 63488 00:14:04.092 }, 00:14:04.092 { 00:14:04.092 "name": null, 00:14:04.092 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.092 "is_configured": false, 00:14:04.092 "data_offset": 2048, 00:14:04.092 "data_size": 63488 00:14:04.092 }, 00:14:04.092 { 00:14:04.092 "name": null, 00:14:04.092 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:04.092 "is_configured": false, 00:14:04.092 "data_offset": 2048, 00:14:04.092 "data_size": 63488 00:14:04.092 } 00:14:04.092 ] 00:14:04.092 }' 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.092 06:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.658 [2024-12-06 06:40:23.102909] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:04.658 [2024-12-06 06:40:23.102997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.658 [2024-12-06 06:40:23.103039] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:04.658 [2024-12-06 06:40:23.103055] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.658 [2024-12-06 06:40:23.103631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.658 [2024-12-06 06:40:23.103664] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:04.658 [2024-12-06 06:40:23.103801] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:04.658 [2024-12-06 06:40:23.103874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:04.658 pt2 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.658 [2024-12-06 06:40:23.110885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.658 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.658 "name": "raid_bdev1", 00:14:04.658 "uuid": "61be7ad6-6f0a-43ca-af44-4ff54c36debd", 00:14:04.658 "strip_size_kb": 64, 00:14:04.658 "state": "configuring", 00:14:04.658 "raid_level": "concat", 00:14:04.658 "superblock": true, 00:14:04.658 "num_base_bdevs": 3, 00:14:04.658 "num_base_bdevs_discovered": 1, 00:14:04.659 "num_base_bdevs_operational": 3, 00:14:04.659 "base_bdevs_list": [ 00:14:04.659 { 00:14:04.659 "name": "pt1", 00:14:04.659 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:04.659 "is_configured": true, 00:14:04.659 "data_offset": 2048, 00:14:04.659 "data_size": 63488 00:14:04.659 }, 00:14:04.659 { 00:14:04.659 "name": null, 00:14:04.659 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.659 "is_configured": false, 00:14:04.659 "data_offset": 0, 00:14:04.659 "data_size": 63488 00:14:04.659 }, 00:14:04.659 { 00:14:04.659 "name": null, 00:14:04.659 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:14:04.659 "is_configured": false, 00:14:04.659 "data_offset": 2048, 00:14:04.659 "data_size": 63488 00:14:04.659 } 00:14:04.659 ] 00:14:04.659 }' 00:14:04.659 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.659 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.225 [2024-12-06 06:40:23.639021] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:05.225 [2024-12-06 06:40:23.639109] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.225 [2024-12-06 06:40:23.639140] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:05.225 [2024-12-06 06:40:23.639158] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.225 [2024-12-06 06:40:23.640068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.225 [2024-12-06 06:40:23.640110] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:05.225 [2024-12-06 06:40:23.640216] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:05.225 [2024-12-06 06:40:23.640254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:05.225 pt2 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.225 [2024-12-06 06:40:23.646988] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:05.225 [2024-12-06 06:40:23.647188] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.225 [2024-12-06 06:40:23.647239] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:05.225 [2024-12-06 06:40:23.647277] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.225 [2024-12-06 06:40:23.647792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.225 [2024-12-06 06:40:23.647837] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:05.225 [2024-12-06 06:40:23.647917] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:05.225 [2024-12-06 06:40:23.647951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:05.225 [2024-12-06 06:40:23.648099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:05.225 [2024-12-06 06:40:23.648123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:05.225 [2024-12-06 06:40:23.648552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:05.225 [2024-12-06 
06:40:23.648761] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:05.225 [2024-12-06 06:40:23.648776] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:05.225 [2024-12-06 06:40:23.648946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.225 pt3 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.225 "name": "raid_bdev1", 00:14:05.225 "uuid": "61be7ad6-6f0a-43ca-af44-4ff54c36debd", 00:14:05.225 "strip_size_kb": 64, 00:14:05.225 "state": "online", 00:14:05.225 "raid_level": "concat", 00:14:05.225 "superblock": true, 00:14:05.225 "num_base_bdevs": 3, 00:14:05.225 "num_base_bdevs_discovered": 3, 00:14:05.225 "num_base_bdevs_operational": 3, 00:14:05.225 "base_bdevs_list": [ 00:14:05.225 { 00:14:05.225 "name": "pt1", 00:14:05.225 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.225 "is_configured": true, 00:14:05.225 "data_offset": 2048, 00:14:05.225 "data_size": 63488 00:14:05.225 }, 00:14:05.225 { 00:14:05.225 "name": "pt2", 00:14:05.225 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.225 "is_configured": true, 00:14:05.225 "data_offset": 2048, 00:14:05.225 "data_size": 63488 00:14:05.225 }, 00:14:05.225 { 00:14:05.225 "name": "pt3", 00:14:05.225 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.225 "is_configured": true, 00:14:05.225 "data_offset": 2048, 00:14:05.225 "data_size": 63488 00:14:05.225 } 00:14:05.225 ] 00:14:05.225 }' 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.225 06:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 
00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.793 [2024-12-06 06:40:24.175581] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:05.793 "name": "raid_bdev1", 00:14:05.793 "aliases": [ 00:14:05.793 "61be7ad6-6f0a-43ca-af44-4ff54c36debd" 00:14:05.793 ], 00:14:05.793 "product_name": "Raid Volume", 00:14:05.793 "block_size": 512, 00:14:05.793 "num_blocks": 190464, 00:14:05.793 "uuid": "61be7ad6-6f0a-43ca-af44-4ff54c36debd", 00:14:05.793 "assigned_rate_limits": { 00:14:05.793 "rw_ios_per_sec": 0, 00:14:05.793 "rw_mbytes_per_sec": 0, 00:14:05.793 "r_mbytes_per_sec": 0, 00:14:05.793 "w_mbytes_per_sec": 0 00:14:05.793 }, 00:14:05.793 "claimed": false, 00:14:05.793 "zoned": false, 00:14:05.793 "supported_io_types": { 00:14:05.793 "read": true, 00:14:05.793 "write": true, 00:14:05.793 "unmap": true, 00:14:05.793 "flush": true, 00:14:05.793 "reset": true, 00:14:05.793 "nvme_admin": false, 00:14:05.793 "nvme_io": false, 00:14:05.793 "nvme_io_md": false, 
00:14:05.793 "write_zeroes": true, 00:14:05.793 "zcopy": false, 00:14:05.793 "get_zone_info": false, 00:14:05.793 "zone_management": false, 00:14:05.793 "zone_append": false, 00:14:05.793 "compare": false, 00:14:05.793 "compare_and_write": false, 00:14:05.793 "abort": false, 00:14:05.793 "seek_hole": false, 00:14:05.793 "seek_data": false, 00:14:05.793 "copy": false, 00:14:05.793 "nvme_iov_md": false 00:14:05.793 }, 00:14:05.793 "memory_domains": [ 00:14:05.793 { 00:14:05.793 "dma_device_id": "system", 00:14:05.793 "dma_device_type": 1 00:14:05.793 }, 00:14:05.793 { 00:14:05.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.793 "dma_device_type": 2 00:14:05.793 }, 00:14:05.793 { 00:14:05.793 "dma_device_id": "system", 00:14:05.793 "dma_device_type": 1 00:14:05.793 }, 00:14:05.793 { 00:14:05.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.793 "dma_device_type": 2 00:14:05.793 }, 00:14:05.793 { 00:14:05.793 "dma_device_id": "system", 00:14:05.793 "dma_device_type": 1 00:14:05.793 }, 00:14:05.793 { 00:14:05.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.793 "dma_device_type": 2 00:14:05.793 } 00:14:05.793 ], 00:14:05.793 "driver_specific": { 00:14:05.793 "raid": { 00:14:05.793 "uuid": "61be7ad6-6f0a-43ca-af44-4ff54c36debd", 00:14:05.793 "strip_size_kb": 64, 00:14:05.793 "state": "online", 00:14:05.793 "raid_level": "concat", 00:14:05.793 "superblock": true, 00:14:05.793 "num_base_bdevs": 3, 00:14:05.793 "num_base_bdevs_discovered": 3, 00:14:05.793 "num_base_bdevs_operational": 3, 00:14:05.793 "base_bdevs_list": [ 00:14:05.793 { 00:14:05.793 "name": "pt1", 00:14:05.793 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.793 "is_configured": true, 00:14:05.793 "data_offset": 2048, 00:14:05.793 "data_size": 63488 00:14:05.793 }, 00:14:05.793 { 00:14:05.793 "name": "pt2", 00:14:05.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.793 "is_configured": true, 00:14:05.793 "data_offset": 2048, 00:14:05.793 "data_size": 63488 00:14:05.793 }, 
00:14:05.793 { 00:14:05.793 "name": "pt3", 00:14:05.793 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.793 "is_configured": true, 00:14:05.793 "data_offset": 2048, 00:14:05.793 "data_size": 63488 00:14:05.793 } 00:14:05.793 ] 00:14:05.793 } 00:14:05.793 } 00:14:05.793 }' 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:05.793 pt2 00:14:05.793 pt3' 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.793 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.052 
[2024-12-06 06:40:24.507658] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 61be7ad6-6f0a-43ca-af44-4ff54c36debd '!=' 61be7ad6-6f0a-43ca-af44-4ff54c36debd ']' 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67092 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67092 ']' 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67092 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67092 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.052 killing process with pid 67092 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67092' 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67092 00:14:06.052 06:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67092 00:14:06.052 [2024-12-06 06:40:24.576373] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:06.052 [2024-12-06 06:40:24.576566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.052 [2024-12-06 06:40:24.576697] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.052 [2024-12-06 06:40:24.576732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:06.311 [2024-12-06 06:40:24.849148] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:07.246 06:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:07.246 ************************************ 00:14:07.246 END TEST raid_superblock_test 00:14:07.246 ************************************ 00:14:07.246 00:14:07.246 real 0m5.588s 00:14:07.246 user 0m8.400s 00:14:07.246 sys 0m0.800s 00:14:07.246 06:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.246 06:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.504 06:40:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:14:07.504 06:40:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:07.504 06:40:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.504 06:40:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:07.504 ************************************ 00:14:07.504 START TEST raid_read_error_test 00:14:07.504 ************************************ 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:07.504 06:40:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Ljq5ceoBBw 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67345 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67345 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:07.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67345 ']' 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.504 06:40:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.504 [2024-12-06 06:40:26.052371] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:14:07.504 [2024-12-06 06:40:26.052570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67345 ] 00:14:07.763 [2024-12-06 06:40:26.240393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.763 [2024-12-06 06:40:26.394938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.021 [2024-12-06 06:40:26.619561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.021 [2024-12-06 06:40:26.619642] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.588 BaseBdev1_malloc 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.588 true 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.588 [2024-12-06 06:40:27.129188] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:08.588 [2024-12-06 06:40:27.129279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.588 [2024-12-06 06:40:27.129310] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:08.588 [2024-12-06 06:40:27.129329] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.588 [2024-12-06 06:40:27.132224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.588 [2024-12-06 06:40:27.132274] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:08.588 BaseBdev1 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.588 BaseBdev2_malloc 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.588 true 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.588 [2024-12-06 06:40:27.185666] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:08.588 [2024-12-06 06:40:27.185734] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.588 [2024-12-06 06:40:27.185761] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:08.588 [2024-12-06 06:40:27.185780] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.588 [2024-12-06 06:40:27.188645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.588 [2024-12-06 06:40:27.188695] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:08.588 BaseBdev2 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.588 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.847 BaseBdev3_malloc 00:14:08.847 06:40:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.847 true 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.847 [2024-12-06 06:40:27.253057] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:08.847 [2024-12-06 06:40:27.253121] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.847 [2024-12-06 06:40:27.253149] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:08.847 [2024-12-06 06:40:27.253168] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.847 [2024-12-06 06:40:27.256054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.847 [2024-12-06 06:40:27.256102] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:08.847 BaseBdev3 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.847 [2024-12-06 06:40:27.261169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.847 [2024-12-06 06:40:27.263663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:08.847 [2024-12-06 06:40:27.263779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:08.847 [2024-12-06 06:40:27.264167] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:08.847 [2024-12-06 06:40:27.264210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:08.847 [2024-12-06 06:40:27.264639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:08.847 [2024-12-06 06:40:27.264944] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:08.847 [2024-12-06 06:40:27.264999] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:08.847 [2024-12-06 06:40:27.265358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.847 06:40:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.847 "name": "raid_bdev1", 00:14:08.847 "uuid": "e7e65f04-d4cd-4b63-8b3e-596befda92a7", 00:14:08.847 "strip_size_kb": 64, 00:14:08.847 "state": "online", 00:14:08.847 "raid_level": "concat", 00:14:08.847 "superblock": true, 00:14:08.847 "num_base_bdevs": 3, 00:14:08.847 "num_base_bdevs_discovered": 3, 00:14:08.847 "num_base_bdevs_operational": 3, 00:14:08.847 "base_bdevs_list": [ 00:14:08.847 { 00:14:08.847 "name": "BaseBdev1", 00:14:08.847 "uuid": "5bb50e59-2691-5b63-a746-f1ee6ac15ce4", 00:14:08.847 "is_configured": true, 00:14:08.847 "data_offset": 2048, 00:14:08.847 "data_size": 63488 00:14:08.847 }, 00:14:08.847 { 00:14:08.847 "name": "BaseBdev2", 00:14:08.847 "uuid": "4115e490-b7ce-5ac0-ac71-d72a9fa08364", 00:14:08.847 "is_configured": true, 00:14:08.847 "data_offset": 2048, 00:14:08.847 "data_size": 63488 
00:14:08.847 }, 00:14:08.847 { 00:14:08.847 "name": "BaseBdev3", 00:14:08.847 "uuid": "9f156e87-892b-510d-b836-fce601813ddb", 00:14:08.847 "is_configured": true, 00:14:08.847 "data_offset": 2048, 00:14:08.847 "data_size": 63488 00:14:08.847 } 00:14:08.847 ] 00:14:08.847 }' 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.847 06:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.413 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:09.413 06:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:09.413 [2024-12-06 06:40:27.922822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.408 "name": "raid_bdev1", 00:14:10.408 "uuid": "e7e65f04-d4cd-4b63-8b3e-596befda92a7", 00:14:10.408 "strip_size_kb": 64, 00:14:10.408 "state": "online", 00:14:10.408 "raid_level": "concat", 00:14:10.408 "superblock": true, 00:14:10.408 "num_base_bdevs": 3, 00:14:10.408 "num_base_bdevs_discovered": 3, 00:14:10.408 "num_base_bdevs_operational": 3, 00:14:10.408 "base_bdevs_list": [ 00:14:10.408 { 00:14:10.408 "name": "BaseBdev1", 00:14:10.408 "uuid": "5bb50e59-2691-5b63-a746-f1ee6ac15ce4", 00:14:10.408 "is_configured": true, 00:14:10.408 "data_offset": 2048, 00:14:10.408 "data_size": 63488 
00:14:10.408 }, 00:14:10.408 { 00:14:10.408 "name": "BaseBdev2", 00:14:10.408 "uuid": "4115e490-b7ce-5ac0-ac71-d72a9fa08364", 00:14:10.408 "is_configured": true, 00:14:10.408 "data_offset": 2048, 00:14:10.408 "data_size": 63488 00:14:10.408 }, 00:14:10.408 { 00:14:10.408 "name": "BaseBdev3", 00:14:10.408 "uuid": "9f156e87-892b-510d-b836-fce601813ddb", 00:14:10.408 "is_configured": true, 00:14:10.408 "data_offset": 2048, 00:14:10.408 "data_size": 63488 00:14:10.408 } 00:14:10.408 ] 00:14:10.408 }' 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.408 06:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.667 06:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:10.667 06:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.667 06:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.667 [2024-12-06 06:40:29.306481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:10.667 [2024-12-06 06:40:29.306540] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:10.667 [2024-12-06 06:40:29.309988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.667 [2024-12-06 06:40:29.310057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.667 [2024-12-06 06:40:29.310113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.667 [2024-12-06 06:40:29.310131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:10.667 { 00:14:10.667 "results": [ 00:14:10.667 { 00:14:10.667 "job": "raid_bdev1", 00:14:10.667 "core_mask": "0x1", 00:14:10.667 "workload": "randrw", 00:14:10.667 "percentage": 50, 
00:14:10.667 "status": "finished", 00:14:10.667 "queue_depth": 1, 00:14:10.667 "io_size": 131072, 00:14:10.667 "runtime": 1.381334, 00:14:10.667 "iops": 10295.120513937976, 00:14:10.667 "mibps": 1286.890064242247, 00:14:10.667 "io_failed": 1, 00:14:10.667 "io_timeout": 0, 00:14:10.667 "avg_latency_us": 135.08545071016735, 00:14:10.667 "min_latency_us": 44.45090909090909, 00:14:10.667 "max_latency_us": 1869.2654545454545 00:14:10.667 } 00:14:10.667 ], 00:14:10.667 "core_count": 1 00:14:10.667 } 00:14:10.667 06:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.925 06:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67345 00:14:10.925 06:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67345 ']' 00:14:10.925 06:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67345 00:14:10.925 06:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:10.925 06:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.925 06:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67345 00:14:10.925 06:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:10.925 06:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:10.925 killing process with pid 67345 00:14:10.925 06:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67345' 00:14:10.925 06:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67345 00:14:10.925 [2024-12-06 06:40:29.342674] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:10.925 06:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67345 00:14:10.925 [2024-12-06 
06:40:29.552517] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:12.298 06:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Ljq5ceoBBw 00:14:12.298 06:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:12.298 06:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:12.298 06:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:14:12.298 06:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:12.298 06:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:12.298 06:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:12.298 06:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:14:12.298 00:14:12.298 real 0m4.733s 00:14:12.298 user 0m5.912s 00:14:12.298 sys 0m0.593s 00:14:12.298 06:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.298 06:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.298 ************************************ 00:14:12.298 END TEST raid_read_error_test 00:14:12.298 ************************************ 00:14:12.298 06:40:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:14:12.298 06:40:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:12.298 06:40:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.298 06:40:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:12.298 ************************************ 00:14:12.298 START TEST raid_write_error_test 00:14:12.298 ************************************ 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:14:12.298 06:40:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:12.298 06:40:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XRf2L8fr6B 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67496 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67496 00:14:12.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67496 ']' 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.298 06:40:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.298 [2024-12-06 06:40:30.833410] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:14:12.298 [2024-12-06 06:40:30.833850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67496 ] 00:14:12.556 [2024-12-06 06:40:31.019480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.556 [2024-12-06 06:40:31.153571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.813 [2024-12-06 06:40:31.357478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.813 [2024-12-06 06:40:31.357731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.379 BaseBdev1_malloc 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.379 true 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.379 [2024-12-06 06:40:31.803730] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:13.379 [2024-12-06 06:40:31.803802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.379 [2024-12-06 06:40:31.803834] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:13.379 [2024-12-06 06:40:31.803852] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.379 [2024-12-06 06:40:31.806648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.379 [2024-12-06 06:40:31.806702] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:13.379 BaseBdev1 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:13.379 BaseBdev2_malloc 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.379 true 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.379 [2024-12-06 06:40:31.867564] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:13.379 [2024-12-06 06:40:31.867766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.379 [2024-12-06 06:40:31.867803] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:13.379 [2024-12-06 06:40:31.867822] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.379 [2024-12-06 06:40:31.870671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.379 [2024-12-06 06:40:31.870724] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:13.379 BaseBdev2 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:13.379 06:40:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.379 BaseBdev3_malloc 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.379 true 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.379 [2024-12-06 06:40:31.940410] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:13.379 [2024-12-06 06:40:31.940481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.379 [2024-12-06 06:40:31.940510] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:13.379 [2024-12-06 06:40:31.940541] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.379 [2024-12-06 06:40:31.943358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.379 [2024-12-06 06:40:31.943422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:14:13.379 BaseBdev3 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.379 [2024-12-06 06:40:31.952538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.379 [2024-12-06 06:40:31.954978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:13.379 [2024-12-06 06:40:31.955086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.379 [2024-12-06 06:40:31.955361] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:13.379 [2024-12-06 06:40:31.955381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:13.379 [2024-12-06 06:40:31.955729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:13.379 [2024-12-06 06:40:31.955945] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:13.379 [2024-12-06 06:40:31.955977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:13.379 [2024-12-06 06:40:31.956164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:13.379 06:40:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.380 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.380 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:13.380 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.380 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.380 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.380 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.380 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.380 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.380 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.380 06:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.380 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.380 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.380 06:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.380 06:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.380 "name": "raid_bdev1", 00:14:13.380 "uuid": "72f8d871-08d4-4ffa-8ed0-725ad952570a", 00:14:13.380 "strip_size_kb": 64, 00:14:13.380 "state": "online", 00:14:13.380 "raid_level": "concat", 00:14:13.380 "superblock": true, 00:14:13.380 "num_base_bdevs": 3, 00:14:13.380 "num_base_bdevs_discovered": 3, 00:14:13.380 "num_base_bdevs_operational": 3, 00:14:13.380 "base_bdevs_list": [ 00:14:13.380 { 00:14:13.380 
"name": "BaseBdev1", 00:14:13.380 "uuid": "065d5383-f766-5c44-b11c-c50cd13c5c96", 00:14:13.380 "is_configured": true, 00:14:13.380 "data_offset": 2048, 00:14:13.380 "data_size": 63488 00:14:13.380 }, 00:14:13.380 { 00:14:13.380 "name": "BaseBdev2", 00:14:13.380 "uuid": "02b098ec-9354-5e4e-99e2-f6a06712fbe3", 00:14:13.380 "is_configured": true, 00:14:13.380 "data_offset": 2048, 00:14:13.380 "data_size": 63488 00:14:13.380 }, 00:14:13.380 { 00:14:13.380 "name": "BaseBdev3", 00:14:13.380 "uuid": "ca1a7812-b403-5c44-9e06-41400470b4c9", 00:14:13.380 "is_configured": true, 00:14:13.380 "data_offset": 2048, 00:14:13.380 "data_size": 63488 00:14:13.380 } 00:14:13.380 ] 00:14:13.380 }' 00:14:13.380 06:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.380 06:40:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.947 06:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:13.947 06:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:13.947 [2024-12-06 06:40:32.590336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.930 "name": "raid_bdev1", 00:14:14.930 "uuid": "72f8d871-08d4-4ffa-8ed0-725ad952570a", 00:14:14.930 "strip_size_kb": 64, 00:14:14.930 "state": "online", 
00:14:14.930 "raid_level": "concat", 00:14:14.930 "superblock": true, 00:14:14.930 "num_base_bdevs": 3, 00:14:14.930 "num_base_bdevs_discovered": 3, 00:14:14.930 "num_base_bdevs_operational": 3, 00:14:14.930 "base_bdevs_list": [ 00:14:14.930 { 00:14:14.930 "name": "BaseBdev1", 00:14:14.930 "uuid": "065d5383-f766-5c44-b11c-c50cd13c5c96", 00:14:14.930 "is_configured": true, 00:14:14.930 "data_offset": 2048, 00:14:14.930 "data_size": 63488 00:14:14.930 }, 00:14:14.930 { 00:14:14.930 "name": "BaseBdev2", 00:14:14.930 "uuid": "02b098ec-9354-5e4e-99e2-f6a06712fbe3", 00:14:14.930 "is_configured": true, 00:14:14.930 "data_offset": 2048, 00:14:14.930 "data_size": 63488 00:14:14.930 }, 00:14:14.930 { 00:14:14.930 "name": "BaseBdev3", 00:14:14.930 "uuid": "ca1a7812-b403-5c44-9e06-41400470b4c9", 00:14:14.930 "is_configured": true, 00:14:14.930 "data_offset": 2048, 00:14:14.930 "data_size": 63488 00:14:14.930 } 00:14:14.930 ] 00:14:14.930 }' 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.930 06:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.496 06:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:15.496 06:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.496 06:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.496 [2024-12-06 06:40:34.033794] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:15.496 [2024-12-06 06:40:34.034109] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:15.496 { 00:14:15.496 "results": [ 00:14:15.496 { 00:14:15.496 "job": "raid_bdev1", 00:14:15.496 "core_mask": "0x1", 00:14:15.496 "workload": "randrw", 00:14:15.496 "percentage": 50, 00:14:15.496 "status": "finished", 00:14:15.496 "queue_depth": 1, 00:14:15.496 "io_size": 
131072, 00:14:15.496 "runtime": 1.441069, 00:14:15.496 "iops": 9581.081821897495, 00:14:15.496 "mibps": 1197.6352277371868, 00:14:15.496 "io_failed": 1, 00:14:15.496 "io_timeout": 0, 00:14:15.496 "avg_latency_us": 146.32714631833983, 00:14:15.496 "min_latency_us": 44.45090909090909, 00:14:15.496 "max_latency_us": 1861.8181818181818 00:14:15.496 } 00:14:15.496 ], 00:14:15.496 "core_count": 1 00:14:15.496 } 00:14:15.496 [2024-12-06 06:40:34.037729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.496 [2024-12-06 06:40:34.037861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.496 [2024-12-06 06:40:34.037927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:15.496 [2024-12-06 06:40:34.037943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:15.496 06:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.496 06:40:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67496 00:14:15.496 06:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67496 ']' 00:14:15.496 06:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67496 00:14:15.496 06:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:14:15.496 06:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.496 06:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67496 00:14:15.496 06:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:15.496 killing process with pid 67496 00:14:15.496 06:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:15.496 06:40:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67496' 00:14:15.496 06:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67496 00:14:15.496 [2024-12-06 06:40:34.076545] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:15.496 06:40:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67496 00:14:15.754 [2024-12-06 06:40:34.298607] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:17.128 06:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:17.128 06:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:17.128 06:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XRf2L8fr6B 00:14:17.128 ************************************ 00:14:17.128 END TEST raid_write_error_test 00:14:17.128 ************************************ 00:14:17.128 06:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:14:17.128 06:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:17.128 06:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:17.128 06:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:17.128 06:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:14:17.128 00:14:17.128 real 0m4.785s 00:14:17.128 user 0m5.883s 00:14:17.128 sys 0m0.547s 00:14:17.128 06:40:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:17.128 06:40:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.128 06:40:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:17.128 06:40:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:14:17.128 06:40:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:17.128 06:40:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:17.128 06:40:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:17.128 ************************************ 00:14:17.128 START TEST raid_state_function_test 00:14:17.128 ************************************ 00:14:17.128 06:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:14:17.128 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:17.128 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:17.128 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:17.128 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:17.128 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:17.128 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:17.128 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:17.128 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67640 00:14:17.129 Process raid pid: 67640 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67640' 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67640 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67640 ']' 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.129 06:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.129 [2024-12-06 06:40:35.654137] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:14:17.129 [2024-12-06 06:40:35.654310] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.388 [2024-12-06 06:40:35.831257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.388 [2024-12-06 06:40:35.986868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.645 [2024-12-06 06:40:36.223523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.645 [2024-12-06 06:40:36.223603] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.211 06:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.211 06:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:18.211 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:18.211 06:40:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.211 06:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.211 [2024-12-06 06:40:36.731364] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:18.211 [2024-12-06 06:40:36.731473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:18.211 [2024-12-06 06:40:36.731491] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:18.211 [2024-12-06 06:40:36.731509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:18.211 [2024-12-06 06:40:36.731520] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:18.212 [2024-12-06 06:40:36.731565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:18.212 06:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.212 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:18.212 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.212 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.212 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.212 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.212 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.212 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.212 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.212 
06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.212 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.212 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.212 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.212 06:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.212 06:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.212 06:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.212 06:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.212 "name": "Existed_Raid", 00:14:18.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.212 "strip_size_kb": 0, 00:14:18.212 "state": "configuring", 00:14:18.212 "raid_level": "raid1", 00:14:18.212 "superblock": false, 00:14:18.212 "num_base_bdevs": 3, 00:14:18.212 "num_base_bdevs_discovered": 0, 00:14:18.212 "num_base_bdevs_operational": 3, 00:14:18.212 "base_bdevs_list": [ 00:14:18.212 { 00:14:18.212 "name": "BaseBdev1", 00:14:18.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.212 "is_configured": false, 00:14:18.212 "data_offset": 0, 00:14:18.212 "data_size": 0 00:14:18.212 }, 00:14:18.212 { 00:14:18.212 "name": "BaseBdev2", 00:14:18.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.212 "is_configured": false, 00:14:18.212 "data_offset": 0, 00:14:18.212 "data_size": 0 00:14:18.212 }, 00:14:18.212 { 00:14:18.212 "name": "BaseBdev3", 00:14:18.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.212 "is_configured": false, 00:14:18.212 "data_offset": 0, 00:14:18.212 "data_size": 0 00:14:18.212 } 00:14:18.212 ] 00:14:18.212 }' 00:14:18.212 06:40:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.212 06:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.777 [2024-12-06 06:40:37.223490] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:18.777 [2024-12-06 06:40:37.223583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.777 [2024-12-06 06:40:37.231398] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:18.777 [2024-12-06 06:40:37.231475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:18.777 [2024-12-06 06:40:37.231490] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:18.777 [2024-12-06 06:40:37.231506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:18.777 [2024-12-06 06:40:37.231516] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:18.777 [2024-12-06 06:40:37.231550] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.777 [2024-12-06 06:40:37.282894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:18.777 BaseBdev1 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.777 [ 00:14:18.777 { 00:14:18.777 "name": "BaseBdev1", 00:14:18.777 "aliases": [ 00:14:18.777 "ff878109-174b-4a79-b688-ba99e86d624e" 00:14:18.777 ], 00:14:18.777 "product_name": "Malloc disk", 00:14:18.777 "block_size": 512, 00:14:18.777 "num_blocks": 65536, 00:14:18.777 "uuid": "ff878109-174b-4a79-b688-ba99e86d624e", 00:14:18.777 "assigned_rate_limits": { 00:14:18.777 "rw_ios_per_sec": 0, 00:14:18.777 "rw_mbytes_per_sec": 0, 00:14:18.777 "r_mbytes_per_sec": 0, 00:14:18.777 "w_mbytes_per_sec": 0 00:14:18.777 }, 00:14:18.777 "claimed": true, 00:14:18.777 "claim_type": "exclusive_write", 00:14:18.777 "zoned": false, 00:14:18.777 "supported_io_types": { 00:14:18.777 "read": true, 00:14:18.777 "write": true, 00:14:18.777 "unmap": true, 00:14:18.777 "flush": true, 00:14:18.777 "reset": true, 00:14:18.777 "nvme_admin": false, 00:14:18.777 "nvme_io": false, 00:14:18.777 "nvme_io_md": false, 00:14:18.777 "write_zeroes": true, 00:14:18.777 "zcopy": true, 00:14:18.777 "get_zone_info": false, 00:14:18.777 "zone_management": false, 00:14:18.777 "zone_append": false, 00:14:18.777 "compare": false, 00:14:18.777 "compare_and_write": false, 00:14:18.777 "abort": true, 00:14:18.777 "seek_hole": false, 00:14:18.777 "seek_data": false, 00:14:18.777 "copy": true, 00:14:18.777 "nvme_iov_md": false 00:14:18.777 }, 00:14:18.777 "memory_domains": [ 00:14:18.777 { 00:14:18.777 "dma_device_id": "system", 00:14:18.777 "dma_device_type": 1 00:14:18.777 }, 00:14:18.777 { 00:14:18.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.777 "dma_device_type": 2 00:14:18.777 } 00:14:18.777 ], 00:14:18.777 "driver_specific": {} 00:14:18.777 } 00:14:18.777 ] 00:14:18.777 06:40:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.777 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:14:18.777 "name": "Existed_Raid", 00:14:18.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.777 "strip_size_kb": 0, 00:14:18.777 "state": "configuring", 00:14:18.777 "raid_level": "raid1", 00:14:18.777 "superblock": false, 00:14:18.777 "num_base_bdevs": 3, 00:14:18.777 "num_base_bdevs_discovered": 1, 00:14:18.778 "num_base_bdevs_operational": 3, 00:14:18.778 "base_bdevs_list": [ 00:14:18.778 { 00:14:18.778 "name": "BaseBdev1", 00:14:18.778 "uuid": "ff878109-174b-4a79-b688-ba99e86d624e", 00:14:18.778 "is_configured": true, 00:14:18.778 "data_offset": 0, 00:14:18.778 "data_size": 65536 00:14:18.778 }, 00:14:18.778 { 00:14:18.778 "name": "BaseBdev2", 00:14:18.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.778 "is_configured": false, 00:14:18.778 "data_offset": 0, 00:14:18.778 "data_size": 0 00:14:18.778 }, 00:14:18.778 { 00:14:18.778 "name": "BaseBdev3", 00:14:18.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.778 "is_configured": false, 00:14:18.778 "data_offset": 0, 00:14:18.778 "data_size": 0 00:14:18.778 } 00:14:18.778 ] 00:14:18.778 }' 00:14:18.778 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.778 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.344 [2024-12-06 06:40:37.823132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.344 [2024-12-06 06:40:37.823250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.344 [2024-12-06 06:40:37.831079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.344 [2024-12-06 06:40:37.833744] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.344 [2024-12-06 06:40:37.833799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.344 [2024-12-06 06:40:37.833815] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:19.344 [2024-12-06 06:40:37.833830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.344 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.345 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.345 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.345 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.345 "name": "Existed_Raid", 00:14:19.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.345 "strip_size_kb": 0, 00:14:19.345 "state": "configuring", 00:14:19.345 "raid_level": "raid1", 00:14:19.345 "superblock": false, 00:14:19.345 "num_base_bdevs": 3, 00:14:19.345 "num_base_bdevs_discovered": 1, 00:14:19.345 "num_base_bdevs_operational": 3, 00:14:19.345 "base_bdevs_list": [ 00:14:19.345 { 00:14:19.345 "name": "BaseBdev1", 00:14:19.345 "uuid": "ff878109-174b-4a79-b688-ba99e86d624e", 00:14:19.345 "is_configured": true, 00:14:19.345 "data_offset": 0, 00:14:19.345 "data_size": 65536 00:14:19.345 }, 00:14:19.345 { 00:14:19.345 "name": "BaseBdev2", 00:14:19.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.345 
"is_configured": false, 00:14:19.345 "data_offset": 0, 00:14:19.345 "data_size": 0 00:14:19.345 }, 00:14:19.345 { 00:14:19.345 "name": "BaseBdev3", 00:14:19.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.345 "is_configured": false, 00:14:19.345 "data_offset": 0, 00:14:19.345 "data_size": 0 00:14:19.345 } 00:14:19.345 ] 00:14:19.345 }' 00:14:19.345 06:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.345 06:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.911 [2024-12-06 06:40:38.357971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.911 BaseBdev2 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:19.911 06:40:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.911 [ 00:14:19.911 { 00:14:19.911 "name": "BaseBdev2", 00:14:19.911 "aliases": [ 00:14:19.911 "d01fd8ea-35a9-4f95-afd9-ed4b6a87d922" 00:14:19.911 ], 00:14:19.911 "product_name": "Malloc disk", 00:14:19.911 "block_size": 512, 00:14:19.911 "num_blocks": 65536, 00:14:19.911 "uuid": "d01fd8ea-35a9-4f95-afd9-ed4b6a87d922", 00:14:19.911 "assigned_rate_limits": { 00:14:19.911 "rw_ios_per_sec": 0, 00:14:19.911 "rw_mbytes_per_sec": 0, 00:14:19.911 "r_mbytes_per_sec": 0, 00:14:19.911 "w_mbytes_per_sec": 0 00:14:19.911 }, 00:14:19.911 "claimed": true, 00:14:19.911 "claim_type": "exclusive_write", 00:14:19.911 "zoned": false, 00:14:19.911 "supported_io_types": { 00:14:19.911 "read": true, 00:14:19.911 "write": true, 00:14:19.911 "unmap": true, 00:14:19.911 "flush": true, 00:14:19.911 "reset": true, 00:14:19.911 "nvme_admin": false, 00:14:19.911 "nvme_io": false, 00:14:19.911 "nvme_io_md": false, 00:14:19.911 "write_zeroes": true, 00:14:19.911 "zcopy": true, 00:14:19.911 "get_zone_info": false, 00:14:19.911 "zone_management": false, 00:14:19.911 "zone_append": false, 00:14:19.911 "compare": false, 00:14:19.911 "compare_and_write": false, 00:14:19.911 "abort": true, 00:14:19.911 "seek_hole": false, 00:14:19.911 "seek_data": false, 00:14:19.911 "copy": true, 00:14:19.911 "nvme_iov_md": false 00:14:19.911 }, 00:14:19.911 
"memory_domains": [ 00:14:19.911 { 00:14:19.911 "dma_device_id": "system", 00:14:19.911 "dma_device_type": 1 00:14:19.911 }, 00:14:19.911 { 00:14:19.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.911 "dma_device_type": 2 00:14:19.911 } 00:14:19.911 ], 00:14:19.911 "driver_specific": {} 00:14:19.911 } 00:14:19.911 ] 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.911 "name": "Existed_Raid", 00:14:19.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.911 "strip_size_kb": 0, 00:14:19.911 "state": "configuring", 00:14:19.911 "raid_level": "raid1", 00:14:19.911 "superblock": false, 00:14:19.911 "num_base_bdevs": 3, 00:14:19.911 "num_base_bdevs_discovered": 2, 00:14:19.911 "num_base_bdevs_operational": 3, 00:14:19.911 "base_bdevs_list": [ 00:14:19.911 { 00:14:19.911 "name": "BaseBdev1", 00:14:19.911 "uuid": "ff878109-174b-4a79-b688-ba99e86d624e", 00:14:19.911 "is_configured": true, 00:14:19.911 "data_offset": 0, 00:14:19.911 "data_size": 65536 00:14:19.911 }, 00:14:19.911 { 00:14:19.911 "name": "BaseBdev2", 00:14:19.911 "uuid": "d01fd8ea-35a9-4f95-afd9-ed4b6a87d922", 00:14:19.911 "is_configured": true, 00:14:19.911 "data_offset": 0, 00:14:19.911 "data_size": 65536 00:14:19.911 }, 00:14:19.911 { 00:14:19.911 "name": "BaseBdev3", 00:14:19.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.911 "is_configured": false, 00:14:19.911 "data_offset": 0, 00:14:19.911 "data_size": 0 00:14:19.911 } 00:14:19.911 ] 00:14:19.911 }' 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.911 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.477 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:14:20.477 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.477 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.477 [2024-12-06 06:40:38.887162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:20.477 [2024-12-06 06:40:38.887254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:20.477 [2024-12-06 06:40:38.887276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:20.477 [2024-12-06 06:40:38.887692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:20.477 [2024-12-06 06:40:38.887949] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:20.477 [2024-12-06 06:40:38.887975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:20.477 [2024-12-06 06:40:38.888324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.477 BaseBdev3 00:14:20.477 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.477 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:20.477 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:20.477 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:20.477 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:20.477 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:20.477 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:20.477 06:40:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:20.477 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.478 [ 00:14:20.478 { 00:14:20.478 "name": "BaseBdev3", 00:14:20.478 "aliases": [ 00:14:20.478 "95ba1406-2c20-40df-a664-54d68fcd87bf" 00:14:20.478 ], 00:14:20.478 "product_name": "Malloc disk", 00:14:20.478 "block_size": 512, 00:14:20.478 "num_blocks": 65536, 00:14:20.478 "uuid": "95ba1406-2c20-40df-a664-54d68fcd87bf", 00:14:20.478 "assigned_rate_limits": { 00:14:20.478 "rw_ios_per_sec": 0, 00:14:20.478 "rw_mbytes_per_sec": 0, 00:14:20.478 "r_mbytes_per_sec": 0, 00:14:20.478 "w_mbytes_per_sec": 0 00:14:20.478 }, 00:14:20.478 "claimed": true, 00:14:20.478 "claim_type": "exclusive_write", 00:14:20.478 "zoned": false, 00:14:20.478 "supported_io_types": { 00:14:20.478 "read": true, 00:14:20.478 "write": true, 00:14:20.478 "unmap": true, 00:14:20.478 "flush": true, 00:14:20.478 "reset": true, 00:14:20.478 "nvme_admin": false, 00:14:20.478 "nvme_io": false, 00:14:20.478 "nvme_io_md": false, 00:14:20.478 "write_zeroes": true, 00:14:20.478 "zcopy": true, 00:14:20.478 "get_zone_info": false, 00:14:20.478 "zone_management": false, 00:14:20.478 "zone_append": false, 00:14:20.478 "compare": false, 00:14:20.478 "compare_and_write": false, 00:14:20.478 "abort": true, 00:14:20.478 "seek_hole": false, 00:14:20.478 "seek_data": false, 00:14:20.478 
"copy": true, 00:14:20.478 "nvme_iov_md": false 00:14:20.478 }, 00:14:20.478 "memory_domains": [ 00:14:20.478 { 00:14:20.478 "dma_device_id": "system", 00:14:20.478 "dma_device_type": 1 00:14:20.478 }, 00:14:20.478 { 00:14:20.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.478 "dma_device_type": 2 00:14:20.478 } 00:14:20.478 ], 00:14:20.478 "driver_specific": {} 00:14:20.478 } 00:14:20.478 ] 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.478 06:40:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.478 "name": "Existed_Raid", 00:14:20.478 "uuid": "20ca0b1c-5455-4917-912d-7a4bb1c43bdb", 00:14:20.478 "strip_size_kb": 0, 00:14:20.478 "state": "online", 00:14:20.478 "raid_level": "raid1", 00:14:20.478 "superblock": false, 00:14:20.478 "num_base_bdevs": 3, 00:14:20.478 "num_base_bdevs_discovered": 3, 00:14:20.478 "num_base_bdevs_operational": 3, 00:14:20.478 "base_bdevs_list": [ 00:14:20.478 { 00:14:20.478 "name": "BaseBdev1", 00:14:20.478 "uuid": "ff878109-174b-4a79-b688-ba99e86d624e", 00:14:20.478 "is_configured": true, 00:14:20.478 "data_offset": 0, 00:14:20.478 "data_size": 65536 00:14:20.478 }, 00:14:20.478 { 00:14:20.478 "name": "BaseBdev2", 00:14:20.478 "uuid": "d01fd8ea-35a9-4f95-afd9-ed4b6a87d922", 00:14:20.478 "is_configured": true, 00:14:20.478 "data_offset": 0, 00:14:20.478 "data_size": 65536 00:14:20.478 }, 00:14:20.478 { 00:14:20.478 "name": "BaseBdev3", 00:14:20.478 "uuid": "95ba1406-2c20-40df-a664-54d68fcd87bf", 00:14:20.478 "is_configured": true, 00:14:20.478 "data_offset": 0, 00:14:20.478 "data_size": 65536 00:14:20.478 } 00:14:20.478 ] 00:14:20.478 }' 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.478 06:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.045 06:40:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:21.045 [2024-12-06 06:40:39.431839] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:21.045 "name": "Existed_Raid", 00:14:21.045 "aliases": [ 00:14:21.045 "20ca0b1c-5455-4917-912d-7a4bb1c43bdb" 00:14:21.045 ], 00:14:21.045 "product_name": "Raid Volume", 00:14:21.045 "block_size": 512, 00:14:21.045 "num_blocks": 65536, 00:14:21.045 "uuid": "20ca0b1c-5455-4917-912d-7a4bb1c43bdb", 00:14:21.045 "assigned_rate_limits": { 00:14:21.045 "rw_ios_per_sec": 0, 00:14:21.045 "rw_mbytes_per_sec": 0, 00:14:21.045 "r_mbytes_per_sec": 0, 00:14:21.045 "w_mbytes_per_sec": 0 00:14:21.045 }, 00:14:21.045 "claimed": false, 00:14:21.045 "zoned": false, 
00:14:21.045 "supported_io_types": { 00:14:21.045 "read": true, 00:14:21.045 "write": true, 00:14:21.045 "unmap": false, 00:14:21.045 "flush": false, 00:14:21.045 "reset": true, 00:14:21.045 "nvme_admin": false, 00:14:21.045 "nvme_io": false, 00:14:21.045 "nvme_io_md": false, 00:14:21.045 "write_zeroes": true, 00:14:21.045 "zcopy": false, 00:14:21.045 "get_zone_info": false, 00:14:21.045 "zone_management": false, 00:14:21.045 "zone_append": false, 00:14:21.045 "compare": false, 00:14:21.045 "compare_and_write": false, 00:14:21.045 "abort": false, 00:14:21.045 "seek_hole": false, 00:14:21.045 "seek_data": false, 00:14:21.045 "copy": false, 00:14:21.045 "nvme_iov_md": false 00:14:21.045 }, 00:14:21.045 "memory_domains": [ 00:14:21.045 { 00:14:21.045 "dma_device_id": "system", 00:14:21.045 "dma_device_type": 1 00:14:21.045 }, 00:14:21.045 { 00:14:21.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.045 "dma_device_type": 2 00:14:21.045 }, 00:14:21.045 { 00:14:21.045 "dma_device_id": "system", 00:14:21.045 "dma_device_type": 1 00:14:21.045 }, 00:14:21.045 { 00:14:21.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.045 "dma_device_type": 2 00:14:21.045 }, 00:14:21.045 { 00:14:21.045 "dma_device_id": "system", 00:14:21.045 "dma_device_type": 1 00:14:21.045 }, 00:14:21.045 { 00:14:21.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.045 "dma_device_type": 2 00:14:21.045 } 00:14:21.045 ], 00:14:21.045 "driver_specific": { 00:14:21.045 "raid": { 00:14:21.045 "uuid": "20ca0b1c-5455-4917-912d-7a4bb1c43bdb", 00:14:21.045 "strip_size_kb": 0, 00:14:21.045 "state": "online", 00:14:21.045 "raid_level": "raid1", 00:14:21.045 "superblock": false, 00:14:21.045 "num_base_bdevs": 3, 00:14:21.045 "num_base_bdevs_discovered": 3, 00:14:21.045 "num_base_bdevs_operational": 3, 00:14:21.045 "base_bdevs_list": [ 00:14:21.045 { 00:14:21.045 "name": "BaseBdev1", 00:14:21.045 "uuid": "ff878109-174b-4a79-b688-ba99e86d624e", 00:14:21.045 "is_configured": true, 00:14:21.045 
"data_offset": 0, 00:14:21.045 "data_size": 65536 00:14:21.045 }, 00:14:21.045 { 00:14:21.045 "name": "BaseBdev2", 00:14:21.045 "uuid": "d01fd8ea-35a9-4f95-afd9-ed4b6a87d922", 00:14:21.045 "is_configured": true, 00:14:21.045 "data_offset": 0, 00:14:21.045 "data_size": 65536 00:14:21.045 }, 00:14:21.045 { 00:14:21.045 "name": "BaseBdev3", 00:14:21.045 "uuid": "95ba1406-2c20-40df-a664-54d68fcd87bf", 00:14:21.045 "is_configured": true, 00:14:21.045 "data_offset": 0, 00:14:21.045 "data_size": 65536 00:14:21.045 } 00:14:21.045 ] 00:14:21.045 } 00:14:21.045 } 00:14:21.045 }' 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:21.045 BaseBdev2 00:14:21.045 BaseBdev3' 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.045 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.046 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.046 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.304 [2024-12-06 06:40:39.743527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.304 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.305 "name": "Existed_Raid", 00:14:21.305 "uuid": "20ca0b1c-5455-4917-912d-7a4bb1c43bdb", 00:14:21.305 "strip_size_kb": 0, 00:14:21.305 "state": "online", 00:14:21.305 "raid_level": "raid1", 00:14:21.305 "superblock": false, 00:14:21.305 "num_base_bdevs": 3, 00:14:21.305 "num_base_bdevs_discovered": 2, 00:14:21.305 "num_base_bdevs_operational": 2, 00:14:21.305 "base_bdevs_list": [ 00:14:21.305 { 00:14:21.305 "name": null, 00:14:21.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.305 "is_configured": false, 00:14:21.305 "data_offset": 0, 00:14:21.305 "data_size": 65536 00:14:21.305 }, 00:14:21.305 { 00:14:21.305 "name": "BaseBdev2", 00:14:21.305 "uuid": "d01fd8ea-35a9-4f95-afd9-ed4b6a87d922", 00:14:21.305 "is_configured": true, 00:14:21.305 "data_offset": 0, 00:14:21.305 "data_size": 65536 00:14:21.305 }, 00:14:21.305 { 00:14:21.305 "name": "BaseBdev3", 00:14:21.305 "uuid": "95ba1406-2c20-40df-a664-54d68fcd87bf", 00:14:21.305 "is_configured": true, 00:14:21.305 "data_offset": 0, 00:14:21.305 "data_size": 65536 00:14:21.305 } 00:14:21.305 ] 
00:14:21.305 }' 00:14:21.305 06:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.305 06:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.891 [2024-12-06 06:40:40.420484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:21.891 06:40:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:21.891 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.149 [2024-12-06 06:40:40.572452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:22.149 [2024-12-06 06:40:40.572655] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.149 [2024-12-06 06:40:40.668100] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.149 [2024-12-06 06:40:40.668191] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.149 [2024-12-06 06:40:40.668216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:22.149 06:40:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.149 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:22.150 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.150 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.150 BaseBdev2 00:14:22.150 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.150 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:22.150 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:22.150 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.150 
06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:22.150 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:22.150 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.150 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.150 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.150 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.150 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.150 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:22.150 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.150 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.150 [ 00:14:22.150 { 00:14:22.150 "name": "BaseBdev2", 00:14:22.150 "aliases": [ 00:14:22.150 "bf2aec4e-c29c-4e64-a555-def9d534af46" 00:14:22.150 ], 00:14:22.150 "product_name": "Malloc disk", 00:14:22.150 "block_size": 512, 00:14:22.150 "num_blocks": 65536, 00:14:22.150 "uuid": "bf2aec4e-c29c-4e64-a555-def9d534af46", 00:14:22.150 "assigned_rate_limits": { 00:14:22.150 "rw_ios_per_sec": 0, 00:14:22.150 "rw_mbytes_per_sec": 0, 00:14:22.150 "r_mbytes_per_sec": 0, 00:14:22.150 "w_mbytes_per_sec": 0 00:14:22.150 }, 00:14:22.150 "claimed": false, 00:14:22.150 "zoned": false, 00:14:22.150 "supported_io_types": { 00:14:22.150 "read": true, 00:14:22.150 "write": true, 00:14:22.150 "unmap": true, 00:14:22.150 "flush": true, 00:14:22.150 "reset": true, 00:14:22.150 "nvme_admin": false, 00:14:22.150 "nvme_io": false, 00:14:22.150 "nvme_io_md": false, 00:14:22.150 "write_zeroes": true, 
00:14:22.150 "zcopy": true, 00:14:22.150 "get_zone_info": false, 00:14:22.150 "zone_management": false, 00:14:22.150 "zone_append": false, 00:14:22.150 "compare": false, 00:14:22.150 "compare_and_write": false, 00:14:22.150 "abort": true, 00:14:22.150 "seek_hole": false, 00:14:22.408 "seek_data": false, 00:14:22.408 "copy": true, 00:14:22.408 "nvme_iov_md": false 00:14:22.408 }, 00:14:22.408 "memory_domains": [ 00:14:22.408 { 00:14:22.408 "dma_device_id": "system", 00:14:22.408 "dma_device_type": 1 00:14:22.408 }, 00:14:22.408 { 00:14:22.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.408 "dma_device_type": 2 00:14:22.408 } 00:14:22.408 ], 00:14:22.408 "driver_specific": {} 00:14:22.408 } 00:14:22.408 ] 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.408 BaseBdev3 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.408 06:40:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.408 [ 00:14:22.408 { 00:14:22.408 "name": "BaseBdev3", 00:14:22.408 "aliases": [ 00:14:22.408 "4b28ce67-4c60-4c5a-9290-4c2eb7577b0e" 00:14:22.408 ], 00:14:22.408 "product_name": "Malloc disk", 00:14:22.408 "block_size": 512, 00:14:22.408 "num_blocks": 65536, 00:14:22.408 "uuid": "4b28ce67-4c60-4c5a-9290-4c2eb7577b0e", 00:14:22.408 "assigned_rate_limits": { 00:14:22.408 "rw_ios_per_sec": 0, 00:14:22.408 "rw_mbytes_per_sec": 0, 00:14:22.408 "r_mbytes_per_sec": 0, 00:14:22.408 "w_mbytes_per_sec": 0 00:14:22.408 }, 00:14:22.408 "claimed": false, 00:14:22.408 "zoned": false, 00:14:22.408 "supported_io_types": { 00:14:22.408 "read": true, 00:14:22.408 "write": true, 00:14:22.408 "unmap": true, 00:14:22.408 "flush": true, 00:14:22.408 "reset": true, 00:14:22.408 "nvme_admin": false, 00:14:22.408 "nvme_io": false, 00:14:22.408 "nvme_io_md": false, 00:14:22.408 "write_zeroes": true, 
00:14:22.408 "zcopy": true, 00:14:22.408 "get_zone_info": false, 00:14:22.408 "zone_management": false, 00:14:22.408 "zone_append": false, 00:14:22.408 "compare": false, 00:14:22.408 "compare_and_write": false, 00:14:22.408 "abort": true, 00:14:22.408 "seek_hole": false, 00:14:22.408 "seek_data": false, 00:14:22.408 "copy": true, 00:14:22.408 "nvme_iov_md": false 00:14:22.408 }, 00:14:22.408 "memory_domains": [ 00:14:22.408 { 00:14:22.408 "dma_device_id": "system", 00:14:22.408 "dma_device_type": 1 00:14:22.408 }, 00:14:22.408 { 00:14:22.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.408 "dma_device_type": 2 00:14:22.408 } 00:14:22.408 ], 00:14:22.408 "driver_specific": {} 00:14:22.408 } 00:14:22.408 ] 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.408 [2024-12-06 06:40:40.884485] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.408 [2024-12-06 06:40:40.884622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.408 [2024-12-06 06:40:40.884659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.408 [2024-12-06 06:40:40.887521] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.408 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:14:22.408 "name": "Existed_Raid", 00:14:22.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.408 "strip_size_kb": 0, 00:14:22.408 "state": "configuring", 00:14:22.408 "raid_level": "raid1", 00:14:22.408 "superblock": false, 00:14:22.408 "num_base_bdevs": 3, 00:14:22.408 "num_base_bdevs_discovered": 2, 00:14:22.408 "num_base_bdevs_operational": 3, 00:14:22.408 "base_bdevs_list": [ 00:14:22.408 { 00:14:22.408 "name": "BaseBdev1", 00:14:22.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.408 "is_configured": false, 00:14:22.408 "data_offset": 0, 00:14:22.408 "data_size": 0 00:14:22.408 }, 00:14:22.408 { 00:14:22.408 "name": "BaseBdev2", 00:14:22.409 "uuid": "bf2aec4e-c29c-4e64-a555-def9d534af46", 00:14:22.409 "is_configured": true, 00:14:22.409 "data_offset": 0, 00:14:22.409 "data_size": 65536 00:14:22.409 }, 00:14:22.409 { 00:14:22.409 "name": "BaseBdev3", 00:14:22.409 "uuid": "4b28ce67-4c60-4c5a-9290-4c2eb7577b0e", 00:14:22.409 "is_configured": true, 00:14:22.409 "data_offset": 0, 00:14:22.409 "data_size": 65536 00:14:22.409 } 00:14:22.409 ] 00:14:22.409 }' 00:14:22.409 06:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.409 06:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.975 [2024-12-06 06:40:41.428650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.975 "name": "Existed_Raid", 00:14:22.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.975 "strip_size_kb": 0, 00:14:22.975 "state": "configuring", 00:14:22.975 "raid_level": "raid1", 00:14:22.975 "superblock": false, 00:14:22.975 "num_base_bdevs": 3, 
00:14:22.975 "num_base_bdevs_discovered": 1, 00:14:22.975 "num_base_bdevs_operational": 3, 00:14:22.975 "base_bdevs_list": [ 00:14:22.975 { 00:14:22.975 "name": "BaseBdev1", 00:14:22.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.975 "is_configured": false, 00:14:22.975 "data_offset": 0, 00:14:22.975 "data_size": 0 00:14:22.975 }, 00:14:22.975 { 00:14:22.975 "name": null, 00:14:22.975 "uuid": "bf2aec4e-c29c-4e64-a555-def9d534af46", 00:14:22.975 "is_configured": false, 00:14:22.975 "data_offset": 0, 00:14:22.975 "data_size": 65536 00:14:22.975 }, 00:14:22.975 { 00:14:22.975 "name": "BaseBdev3", 00:14:22.975 "uuid": "4b28ce67-4c60-4c5a-9290-4c2eb7577b0e", 00:14:22.975 "is_configured": true, 00:14:22.975 "data_offset": 0, 00:14:22.975 "data_size": 65536 00:14:22.975 } 00:14:22.975 ] 00:14:22.975 }' 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.975 06:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.541 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.541 06:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.541 06:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.541 06:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:23.542 06:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.542 06:40:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.542 [2024-12-06 06:40:42.055362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.542 BaseBdev1 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.542 [ 00:14:23.542 { 00:14:23.542 "name": "BaseBdev1", 00:14:23.542 "aliases": [ 00:14:23.542 "80102799-e8c8-43b7-9819-d451df051906" 00:14:23.542 ], 00:14:23.542 "product_name": "Malloc disk", 
00:14:23.542 "block_size": 512, 00:14:23.542 "num_blocks": 65536, 00:14:23.542 "uuid": "80102799-e8c8-43b7-9819-d451df051906", 00:14:23.542 "assigned_rate_limits": { 00:14:23.542 "rw_ios_per_sec": 0, 00:14:23.542 "rw_mbytes_per_sec": 0, 00:14:23.542 "r_mbytes_per_sec": 0, 00:14:23.542 "w_mbytes_per_sec": 0 00:14:23.542 }, 00:14:23.542 "claimed": true, 00:14:23.542 "claim_type": "exclusive_write", 00:14:23.542 "zoned": false, 00:14:23.542 "supported_io_types": { 00:14:23.542 "read": true, 00:14:23.542 "write": true, 00:14:23.542 "unmap": true, 00:14:23.542 "flush": true, 00:14:23.542 "reset": true, 00:14:23.542 "nvme_admin": false, 00:14:23.542 "nvme_io": false, 00:14:23.542 "nvme_io_md": false, 00:14:23.542 "write_zeroes": true, 00:14:23.542 "zcopy": true, 00:14:23.542 "get_zone_info": false, 00:14:23.542 "zone_management": false, 00:14:23.542 "zone_append": false, 00:14:23.542 "compare": false, 00:14:23.542 "compare_and_write": false, 00:14:23.542 "abort": true, 00:14:23.542 "seek_hole": false, 00:14:23.542 "seek_data": false, 00:14:23.542 "copy": true, 00:14:23.542 "nvme_iov_md": false 00:14:23.542 }, 00:14:23.542 "memory_domains": [ 00:14:23.542 { 00:14:23.542 "dma_device_id": "system", 00:14:23.542 "dma_device_type": 1 00:14:23.542 }, 00:14:23.542 { 00:14:23.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.542 "dma_device_type": 2 00:14:23.542 } 00:14:23.542 ], 00:14:23.542 "driver_specific": {} 00:14:23.542 } 00:14:23.542 ] 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.542 "name": "Existed_Raid", 00:14:23.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.542 "strip_size_kb": 0, 00:14:23.542 "state": "configuring", 00:14:23.542 "raid_level": "raid1", 00:14:23.542 "superblock": false, 00:14:23.542 "num_base_bdevs": 3, 00:14:23.542 "num_base_bdevs_discovered": 2, 00:14:23.542 "num_base_bdevs_operational": 3, 00:14:23.542 "base_bdevs_list": [ 00:14:23.542 { 00:14:23.542 "name": "BaseBdev1", 00:14:23.542 "uuid": 
"80102799-e8c8-43b7-9819-d451df051906", 00:14:23.542 "is_configured": true, 00:14:23.542 "data_offset": 0, 00:14:23.542 "data_size": 65536 00:14:23.542 }, 00:14:23.542 { 00:14:23.542 "name": null, 00:14:23.542 "uuid": "bf2aec4e-c29c-4e64-a555-def9d534af46", 00:14:23.542 "is_configured": false, 00:14:23.542 "data_offset": 0, 00:14:23.542 "data_size": 65536 00:14:23.542 }, 00:14:23.542 { 00:14:23.542 "name": "BaseBdev3", 00:14:23.542 "uuid": "4b28ce67-4c60-4c5a-9290-4c2eb7577b0e", 00:14:23.542 "is_configured": true, 00:14:23.542 "data_offset": 0, 00:14:23.542 "data_size": 65536 00:14:23.542 } 00:14:23.542 ] 00:14:23.542 }' 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.542 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.110 [2024-12-06 06:40:42.703561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:24.110 06:40:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.110 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.369 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.369 "name": "Existed_Raid", 00:14:24.369 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:24.369 "strip_size_kb": 0, 00:14:24.369 "state": "configuring", 00:14:24.369 "raid_level": "raid1", 00:14:24.369 "superblock": false, 00:14:24.369 "num_base_bdevs": 3, 00:14:24.369 "num_base_bdevs_discovered": 1, 00:14:24.369 "num_base_bdevs_operational": 3, 00:14:24.369 "base_bdevs_list": [ 00:14:24.369 { 00:14:24.369 "name": "BaseBdev1", 00:14:24.369 "uuid": "80102799-e8c8-43b7-9819-d451df051906", 00:14:24.369 "is_configured": true, 00:14:24.369 "data_offset": 0, 00:14:24.369 "data_size": 65536 00:14:24.369 }, 00:14:24.369 { 00:14:24.369 "name": null, 00:14:24.369 "uuid": "bf2aec4e-c29c-4e64-a555-def9d534af46", 00:14:24.369 "is_configured": false, 00:14:24.369 "data_offset": 0, 00:14:24.369 "data_size": 65536 00:14:24.369 }, 00:14:24.369 { 00:14:24.369 "name": null, 00:14:24.369 "uuid": "4b28ce67-4c60-4c5a-9290-4c2eb7577b0e", 00:14:24.369 "is_configured": false, 00:14:24.369 "data_offset": 0, 00:14:24.369 "data_size": 65536 00:14:24.369 } 00:14:24.369 ] 00:14:24.369 }' 00:14:24.369 06:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.369 06:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.628 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.628 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:24.628 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.629 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.629 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.888 [2024-12-06 06:40:43.291904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.888 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.888 "name": "Existed_Raid", 00:14:24.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.888 "strip_size_kb": 0, 00:14:24.888 "state": "configuring", 00:14:24.888 "raid_level": "raid1", 00:14:24.888 "superblock": false, 00:14:24.888 "num_base_bdevs": 3, 00:14:24.888 "num_base_bdevs_discovered": 2, 00:14:24.888 "num_base_bdevs_operational": 3, 00:14:24.888 "base_bdevs_list": [ 00:14:24.888 { 00:14:24.888 "name": "BaseBdev1", 00:14:24.888 "uuid": "80102799-e8c8-43b7-9819-d451df051906", 00:14:24.888 "is_configured": true, 00:14:24.888 "data_offset": 0, 00:14:24.888 "data_size": 65536 00:14:24.888 }, 00:14:24.888 { 00:14:24.888 "name": null, 00:14:24.888 "uuid": "bf2aec4e-c29c-4e64-a555-def9d534af46", 00:14:24.888 "is_configured": false, 00:14:24.888 "data_offset": 0, 00:14:24.888 "data_size": 65536 00:14:24.888 }, 00:14:24.888 { 00:14:24.888 "name": "BaseBdev3", 00:14:24.888 "uuid": "4b28ce67-4c60-4c5a-9290-4c2eb7577b0e", 00:14:24.888 "is_configured": true, 00:14:24.888 "data_offset": 0, 00:14:24.888 "data_size": 65536 00:14:24.888 } 00:14:24.888 ] 00:14:24.888 }' 00:14:24.889 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.889 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.146 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.146 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.146 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.146 06:40:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.405 [2024-12-06 06:40:43.836172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.405 "name": "Existed_Raid", 00:14:25.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.405 "strip_size_kb": 0, 00:14:25.405 "state": "configuring", 00:14:25.405 "raid_level": "raid1", 00:14:25.405 "superblock": false, 00:14:25.405 "num_base_bdevs": 3, 00:14:25.405 "num_base_bdevs_discovered": 1, 00:14:25.405 "num_base_bdevs_operational": 3, 00:14:25.405 "base_bdevs_list": [ 00:14:25.405 { 00:14:25.405 "name": null, 00:14:25.405 "uuid": "80102799-e8c8-43b7-9819-d451df051906", 00:14:25.405 "is_configured": false, 00:14:25.405 "data_offset": 0, 00:14:25.405 "data_size": 65536 00:14:25.405 }, 00:14:25.405 { 00:14:25.405 "name": null, 00:14:25.405 "uuid": "bf2aec4e-c29c-4e64-a555-def9d534af46", 00:14:25.405 "is_configured": false, 00:14:25.405 "data_offset": 0, 00:14:25.405 "data_size": 65536 00:14:25.405 }, 00:14:25.405 { 00:14:25.405 "name": "BaseBdev3", 00:14:25.405 "uuid": "4b28ce67-4c60-4c5a-9290-4c2eb7577b0e", 00:14:25.405 "is_configured": true, 00:14:25.405 "data_offset": 0, 00:14:25.405 "data_size": 65536 00:14:25.405 } 00:14:25.405 ] 00:14:25.405 }' 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.405 06:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.059 [2024-12-06 06:40:44.508708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.059 "name": "Existed_Raid", 00:14:26.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.059 "strip_size_kb": 0, 00:14:26.059 "state": "configuring", 00:14:26.059 "raid_level": "raid1", 00:14:26.059 "superblock": false, 00:14:26.059 "num_base_bdevs": 3, 00:14:26.059 "num_base_bdevs_discovered": 2, 00:14:26.059 "num_base_bdevs_operational": 3, 00:14:26.059 "base_bdevs_list": [ 00:14:26.059 { 00:14:26.059 "name": null, 00:14:26.059 "uuid": "80102799-e8c8-43b7-9819-d451df051906", 00:14:26.059 "is_configured": false, 00:14:26.059 "data_offset": 0, 00:14:26.059 "data_size": 65536 00:14:26.059 }, 00:14:26.059 { 00:14:26.059 "name": "BaseBdev2", 00:14:26.059 "uuid": "bf2aec4e-c29c-4e64-a555-def9d534af46", 00:14:26.059 "is_configured": true, 00:14:26.059 "data_offset": 0, 00:14:26.059 "data_size": 65536 00:14:26.059 }, 00:14:26.059 { 00:14:26.059 "name": "BaseBdev3", 
00:14:26.059 "uuid": "4b28ce67-4c60-4c5a-9290-4c2eb7577b0e", 00:14:26.059 "is_configured": true, 00:14:26.059 "data_offset": 0, 00:14:26.059 "data_size": 65536 00:14:26.059 } 00:14:26.059 ] 00:14:26.059 }' 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.059 06:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 80102799-e8c8-43b7-9819-d451df051906 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.626 [2024-12-06 06:40:45.198166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:26.626 [2024-12-06 06:40:45.198485] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:26.626 [2024-12-06 06:40:45.198509] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:26.626 [2024-12-06 06:40:45.198887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:26.626 [2024-12-06 06:40:45.199096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:26.626 [2024-12-06 06:40:45.199118] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:26.626 [2024-12-06 06:40:45.199446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.626 NewBaseBdev 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.626 
06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.626 [ 00:14:26.626 { 00:14:26.626 "name": "NewBaseBdev", 00:14:26.626 "aliases": [ 00:14:26.626 "80102799-e8c8-43b7-9819-d451df051906" 00:14:26.626 ], 00:14:26.626 "product_name": "Malloc disk", 00:14:26.626 "block_size": 512, 00:14:26.626 "num_blocks": 65536, 00:14:26.626 "uuid": "80102799-e8c8-43b7-9819-d451df051906", 00:14:26.626 "assigned_rate_limits": { 00:14:26.626 "rw_ios_per_sec": 0, 00:14:26.626 "rw_mbytes_per_sec": 0, 00:14:26.626 "r_mbytes_per_sec": 0, 00:14:26.626 "w_mbytes_per_sec": 0 00:14:26.626 }, 00:14:26.626 "claimed": true, 00:14:26.626 "claim_type": "exclusive_write", 00:14:26.626 "zoned": false, 00:14:26.626 "supported_io_types": { 00:14:26.626 "read": true, 00:14:26.626 "write": true, 00:14:26.626 "unmap": true, 00:14:26.626 "flush": true, 00:14:26.626 "reset": true, 00:14:26.626 "nvme_admin": false, 00:14:26.626 "nvme_io": false, 00:14:26.626 "nvme_io_md": false, 00:14:26.626 "write_zeroes": true, 00:14:26.626 "zcopy": true, 00:14:26.626 "get_zone_info": false, 00:14:26.626 "zone_management": false, 00:14:26.626 "zone_append": false, 00:14:26.626 "compare": false, 00:14:26.626 "compare_and_write": false, 00:14:26.626 "abort": true, 00:14:26.626 "seek_hole": false, 00:14:26.626 "seek_data": false, 00:14:26.626 "copy": true, 00:14:26.626 "nvme_iov_md": false 00:14:26.626 }, 00:14:26.626 "memory_domains": [ 00:14:26.626 { 00:14:26.626 "dma_device_id": "system", 00:14:26.626 "dma_device_type": 1 
00:14:26.626 }, 00:14:26.626 { 00:14:26.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.626 "dma_device_type": 2 00:14:26.626 } 00:14:26.626 ], 00:14:26.626 "driver_specific": {} 00:14:26.626 } 00:14:26.626 ] 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.626 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.884 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.884 "name": "Existed_Raid", 00:14:26.884 "uuid": "33e854d8-b22d-4b27-a0a2-0263eb5fa072", 00:14:26.884 "strip_size_kb": 0, 00:14:26.884 "state": "online", 00:14:26.884 "raid_level": "raid1", 00:14:26.884 "superblock": false, 00:14:26.884 "num_base_bdevs": 3, 00:14:26.884 "num_base_bdevs_discovered": 3, 00:14:26.884 "num_base_bdevs_operational": 3, 00:14:26.884 "base_bdevs_list": [ 00:14:26.884 { 00:14:26.884 "name": "NewBaseBdev", 00:14:26.884 "uuid": "80102799-e8c8-43b7-9819-d451df051906", 00:14:26.884 "is_configured": true, 00:14:26.884 "data_offset": 0, 00:14:26.884 "data_size": 65536 00:14:26.884 }, 00:14:26.884 { 00:14:26.884 "name": "BaseBdev2", 00:14:26.884 "uuid": "bf2aec4e-c29c-4e64-a555-def9d534af46", 00:14:26.884 "is_configured": true, 00:14:26.884 "data_offset": 0, 00:14:26.884 "data_size": 65536 00:14:26.884 }, 00:14:26.884 { 00:14:26.884 "name": "BaseBdev3", 00:14:26.884 "uuid": "4b28ce67-4c60-4c5a-9290-4c2eb7577b0e", 00:14:26.884 "is_configured": true, 00:14:26.884 "data_offset": 0, 00:14:26.884 "data_size": 65536 00:14:26.884 } 00:14:26.884 ] 00:14:26.884 }' 00:14:26.884 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.884 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.143 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:27.143 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:27.143 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:27.143 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:14:27.143 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:27.143 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:27.143 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:27.143 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:27.143 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.143 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.143 [2024-12-06 06:40:45.738896] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.143 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.143 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:27.143 "name": "Existed_Raid", 00:14:27.143 "aliases": [ 00:14:27.143 "33e854d8-b22d-4b27-a0a2-0263eb5fa072" 00:14:27.143 ], 00:14:27.143 "product_name": "Raid Volume", 00:14:27.143 "block_size": 512, 00:14:27.143 "num_blocks": 65536, 00:14:27.143 "uuid": "33e854d8-b22d-4b27-a0a2-0263eb5fa072", 00:14:27.143 "assigned_rate_limits": { 00:14:27.143 "rw_ios_per_sec": 0, 00:14:27.143 "rw_mbytes_per_sec": 0, 00:14:27.143 "r_mbytes_per_sec": 0, 00:14:27.143 "w_mbytes_per_sec": 0 00:14:27.143 }, 00:14:27.143 "claimed": false, 00:14:27.143 "zoned": false, 00:14:27.143 "supported_io_types": { 00:14:27.143 "read": true, 00:14:27.143 "write": true, 00:14:27.143 "unmap": false, 00:14:27.143 "flush": false, 00:14:27.143 "reset": true, 00:14:27.143 "nvme_admin": false, 00:14:27.143 "nvme_io": false, 00:14:27.143 "nvme_io_md": false, 00:14:27.143 "write_zeroes": true, 00:14:27.143 "zcopy": false, 00:14:27.143 "get_zone_info": false, 00:14:27.143 "zone_management": false, 00:14:27.143 
"zone_append": false, 00:14:27.143 "compare": false, 00:14:27.143 "compare_and_write": false, 00:14:27.143 "abort": false, 00:14:27.143 "seek_hole": false, 00:14:27.143 "seek_data": false, 00:14:27.143 "copy": false, 00:14:27.143 "nvme_iov_md": false 00:14:27.143 }, 00:14:27.143 "memory_domains": [ 00:14:27.143 { 00:14:27.143 "dma_device_id": "system", 00:14:27.143 "dma_device_type": 1 00:14:27.143 }, 00:14:27.143 { 00:14:27.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.143 "dma_device_type": 2 00:14:27.143 }, 00:14:27.143 { 00:14:27.143 "dma_device_id": "system", 00:14:27.143 "dma_device_type": 1 00:14:27.143 }, 00:14:27.143 { 00:14:27.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.143 "dma_device_type": 2 00:14:27.143 }, 00:14:27.143 { 00:14:27.143 "dma_device_id": "system", 00:14:27.143 "dma_device_type": 1 00:14:27.143 }, 00:14:27.143 { 00:14:27.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.143 "dma_device_type": 2 00:14:27.143 } 00:14:27.143 ], 00:14:27.143 "driver_specific": { 00:14:27.143 "raid": { 00:14:27.143 "uuid": "33e854d8-b22d-4b27-a0a2-0263eb5fa072", 00:14:27.143 "strip_size_kb": 0, 00:14:27.143 "state": "online", 00:14:27.143 "raid_level": "raid1", 00:14:27.143 "superblock": false, 00:14:27.143 "num_base_bdevs": 3, 00:14:27.143 "num_base_bdevs_discovered": 3, 00:14:27.143 "num_base_bdevs_operational": 3, 00:14:27.143 "base_bdevs_list": [ 00:14:27.143 { 00:14:27.143 "name": "NewBaseBdev", 00:14:27.143 "uuid": "80102799-e8c8-43b7-9819-d451df051906", 00:14:27.143 "is_configured": true, 00:14:27.143 "data_offset": 0, 00:14:27.143 "data_size": 65536 00:14:27.143 }, 00:14:27.143 { 00:14:27.143 "name": "BaseBdev2", 00:14:27.143 "uuid": "bf2aec4e-c29c-4e64-a555-def9d534af46", 00:14:27.143 "is_configured": true, 00:14:27.143 "data_offset": 0, 00:14:27.143 "data_size": 65536 00:14:27.143 }, 00:14:27.143 { 00:14:27.143 "name": "BaseBdev3", 00:14:27.143 "uuid": "4b28ce67-4c60-4c5a-9290-4c2eb7577b0e", 00:14:27.143 "is_configured": true, 
00:14:27.143 "data_offset": 0, 00:14:27.143 "data_size": 65536 00:14:27.143 } 00:14:27.143 ] 00:14:27.143 } 00:14:27.143 } 00:14:27.143 }' 00:14:27.143 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:27.402 BaseBdev2 00:14:27.402 BaseBdev3' 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.402 06:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.402 06:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.661 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.661 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.661 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:27.661 06:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.661 06:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.661 [2024-12-06 06:40:46.050600] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:14:27.661 [2024-12-06 06:40:46.050708] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.661 [2024-12-06 06:40:46.050851] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.661 [2024-12-06 06:40:46.051271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.661 [2024-12-06 06:40:46.051291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:27.661 06:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.661 06:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67640 00:14:27.661 06:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67640 ']' 00:14:27.661 06:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67640 00:14:27.661 06:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:27.661 06:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.661 06:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67640 00:14:27.661 killing process with pid 67640 00:14:27.661 06:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.661 06:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.661 06:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67640' 00:14:27.661 06:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67640 00:14:27.661 [2024-12-06 06:40:46.084575] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:14:27.661 06:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67640 00:14:27.919 [2024-12-06 06:40:46.388032] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:29.291 06:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:29.292 00:14:29.292 real 0m11.983s 00:14:29.292 user 0m19.682s 00:14:29.292 sys 0m1.693s 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.292 ************************************ 00:14:29.292 END TEST raid_state_function_test 00:14:29.292 ************************************ 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.292 06:40:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:14:29.292 06:40:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:29.292 06:40:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.292 06:40:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:29.292 ************************************ 00:14:29.292 START TEST raid_state_function_test_sb 00:14:29.292 ************************************ 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:29.292 Process raid pid: 68278 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68278 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68278' 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68278 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68278 ']' 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.292 06:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.292 [2024-12-06 06:40:47.708142] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:14:29.292 [2024-12-06 06:40:47.708327] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.292 [2024-12-06 06:40:47.889479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.549 [2024-12-06 06:40:48.039803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.809 [2024-12-06 06:40:48.272334] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.809 [2024-12-06 06:40:48.272405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.378 [2024-12-06 06:40:48.743922] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:30.378 [2024-12-06 06:40:48.744044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:30.378 [2024-12-06 06:40:48.744062] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:30.378 [2024-12-06 06:40:48.744079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:30.378 [2024-12-06 06:40:48.744088] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:30.378 [2024-12-06 06:40:48.744104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.378 06:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.379 06:40:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.379 06:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.379 "name": "Existed_Raid", 00:14:30.379 "uuid": "1bbf13d6-a579-4313-aa7f-5e71dfb09b99", 00:14:30.379 "strip_size_kb": 0, 00:14:30.379 "state": "configuring", 00:14:30.379 "raid_level": "raid1", 00:14:30.379 "superblock": true, 00:14:30.379 "num_base_bdevs": 3, 00:14:30.379 "num_base_bdevs_discovered": 0, 00:14:30.379 "num_base_bdevs_operational": 3, 00:14:30.379 "base_bdevs_list": [ 00:14:30.379 { 00:14:30.379 "name": "BaseBdev1", 00:14:30.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.379 "is_configured": false, 00:14:30.379 "data_offset": 0, 00:14:30.379 "data_size": 0 00:14:30.379 }, 00:14:30.379 { 00:14:30.379 "name": "BaseBdev2", 00:14:30.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.379 "is_configured": false, 00:14:30.379 "data_offset": 0, 00:14:30.379 "data_size": 0 00:14:30.379 }, 00:14:30.379 { 00:14:30.379 "name": "BaseBdev3", 00:14:30.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.379 "is_configured": false, 00:14:30.379 "data_offset": 0, 00:14:30.379 "data_size": 0 00:14:30.379 } 00:14:30.379 ] 00:14:30.379 }' 00:14:30.379 06:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.379 06:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.637 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:30.637 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.637 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.637 [2024-12-06 06:40:49.231981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:30.637 [2024-12-06 06:40:49.232057] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:30.637 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.637 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:30.637 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.637 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.637 [2024-12-06 06:40:49.239907] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:30.637 [2024-12-06 06:40:49.239975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:30.637 [2024-12-06 06:40:49.239991] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:30.637 [2024-12-06 06:40:49.240008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:30.637 [2024-12-06 06:40:49.240017] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:30.637 [2024-12-06 06:40:49.240033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:30.637 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.637 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:30.637 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.637 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.896 [2024-12-06 06:40:49.289149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.896 BaseBdev1 
00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.896 [ 00:14:30.896 { 00:14:30.896 "name": "BaseBdev1", 00:14:30.896 "aliases": [ 00:14:30.896 "da11e9ea-f139-4f2a-aa0c-c736ded87c2e" 00:14:30.896 ], 00:14:30.896 "product_name": "Malloc disk", 00:14:30.896 "block_size": 512, 00:14:30.896 "num_blocks": 65536, 00:14:30.896 "uuid": "da11e9ea-f139-4f2a-aa0c-c736ded87c2e", 00:14:30.896 "assigned_rate_limits": { 00:14:30.896 
"rw_ios_per_sec": 0, 00:14:30.896 "rw_mbytes_per_sec": 0, 00:14:30.896 "r_mbytes_per_sec": 0, 00:14:30.896 "w_mbytes_per_sec": 0 00:14:30.896 }, 00:14:30.896 "claimed": true, 00:14:30.896 "claim_type": "exclusive_write", 00:14:30.896 "zoned": false, 00:14:30.896 "supported_io_types": { 00:14:30.896 "read": true, 00:14:30.896 "write": true, 00:14:30.896 "unmap": true, 00:14:30.896 "flush": true, 00:14:30.896 "reset": true, 00:14:30.896 "nvme_admin": false, 00:14:30.896 "nvme_io": false, 00:14:30.896 "nvme_io_md": false, 00:14:30.896 "write_zeroes": true, 00:14:30.896 "zcopy": true, 00:14:30.896 "get_zone_info": false, 00:14:30.896 "zone_management": false, 00:14:30.896 "zone_append": false, 00:14:30.896 "compare": false, 00:14:30.896 "compare_and_write": false, 00:14:30.896 "abort": true, 00:14:30.896 "seek_hole": false, 00:14:30.896 "seek_data": false, 00:14:30.896 "copy": true, 00:14:30.896 "nvme_iov_md": false 00:14:30.896 }, 00:14:30.896 "memory_domains": [ 00:14:30.896 { 00:14:30.896 "dma_device_id": "system", 00:14:30.896 "dma_device_type": 1 00:14:30.896 }, 00:14:30.896 { 00:14:30.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.896 "dma_device_type": 2 00:14:30.896 } 00:14:30.896 ], 00:14:30.896 "driver_specific": {} 00:14:30.896 } 00:14:30.896 ] 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.896 "name": "Existed_Raid", 00:14:30.896 "uuid": "a26ca695-1b8d-4cf7-812c-69928ab4897b", 00:14:30.896 "strip_size_kb": 0, 00:14:30.896 "state": "configuring", 00:14:30.896 "raid_level": "raid1", 00:14:30.896 "superblock": true, 00:14:30.896 "num_base_bdevs": 3, 00:14:30.896 "num_base_bdevs_discovered": 1, 00:14:30.896 "num_base_bdevs_operational": 3, 00:14:30.896 "base_bdevs_list": [ 00:14:30.896 { 00:14:30.896 "name": "BaseBdev1", 00:14:30.896 "uuid": "da11e9ea-f139-4f2a-aa0c-c736ded87c2e", 00:14:30.896 "is_configured": true, 00:14:30.896 "data_offset": 2048, 00:14:30.896 "data_size": 63488 
00:14:30.896 }, 00:14:30.896 { 00:14:30.896 "name": "BaseBdev2", 00:14:30.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.896 "is_configured": false, 00:14:30.896 "data_offset": 0, 00:14:30.896 "data_size": 0 00:14:30.896 }, 00:14:30.896 { 00:14:30.896 "name": "BaseBdev3", 00:14:30.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.896 "is_configured": false, 00:14:30.896 "data_offset": 0, 00:14:30.896 "data_size": 0 00:14:30.896 } 00:14:30.896 ] 00:14:30.896 }' 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.896 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.464 [2024-12-06 06:40:49.845472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:31.464 [2024-12-06 06:40:49.845611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.464 [2024-12-06 06:40:49.853450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.464 [2024-12-06 06:40:49.856627] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:31.464 [2024-12-06 06:40:49.856693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:31.464 [2024-12-06 06:40:49.856711] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:31.464 [2024-12-06 06:40:49.856728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.464 "name": "Existed_Raid", 00:14:31.464 "uuid": "f51257c6-694f-4d05-847f-6405ec18e244", 00:14:31.464 "strip_size_kb": 0, 00:14:31.464 "state": "configuring", 00:14:31.464 "raid_level": "raid1", 00:14:31.464 "superblock": true, 00:14:31.464 "num_base_bdevs": 3, 00:14:31.464 "num_base_bdevs_discovered": 1, 00:14:31.464 "num_base_bdevs_operational": 3, 00:14:31.464 "base_bdevs_list": [ 00:14:31.464 { 00:14:31.464 "name": "BaseBdev1", 00:14:31.464 "uuid": "da11e9ea-f139-4f2a-aa0c-c736ded87c2e", 00:14:31.464 "is_configured": true, 00:14:31.464 "data_offset": 2048, 00:14:31.464 "data_size": 63488 00:14:31.464 }, 00:14:31.464 { 00:14:31.464 "name": "BaseBdev2", 00:14:31.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.464 "is_configured": false, 00:14:31.464 "data_offset": 0, 00:14:31.464 "data_size": 0 00:14:31.464 }, 00:14:31.464 { 00:14:31.464 "name": "BaseBdev3", 00:14:31.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.464 "is_configured": false, 00:14:31.464 "data_offset": 0, 00:14:31.464 "data_size": 0 00:14:31.464 } 00:14:31.464 ] 00:14:31.464 }' 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.464 06:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:31.722 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:31.722 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.723 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.981 [2024-12-06 06:40:50.404084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:31.981 BaseBdev2 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.981 [ 00:14:31.981 { 00:14:31.981 "name": "BaseBdev2", 00:14:31.981 "aliases": [ 00:14:31.981 "2aaee8be-4c23-4495-aed6-9047a765a2bc" 00:14:31.981 ], 00:14:31.981 "product_name": "Malloc disk", 00:14:31.981 "block_size": 512, 00:14:31.981 "num_blocks": 65536, 00:14:31.981 "uuid": "2aaee8be-4c23-4495-aed6-9047a765a2bc", 00:14:31.981 "assigned_rate_limits": { 00:14:31.981 "rw_ios_per_sec": 0, 00:14:31.981 "rw_mbytes_per_sec": 0, 00:14:31.981 "r_mbytes_per_sec": 0, 00:14:31.981 "w_mbytes_per_sec": 0 00:14:31.981 }, 00:14:31.981 "claimed": true, 00:14:31.981 "claim_type": "exclusive_write", 00:14:31.981 "zoned": false, 00:14:31.981 "supported_io_types": { 00:14:31.981 "read": true, 00:14:31.981 "write": true, 00:14:31.981 "unmap": true, 00:14:31.981 "flush": true, 00:14:31.981 "reset": true, 00:14:31.981 "nvme_admin": false, 00:14:31.981 "nvme_io": false, 00:14:31.981 "nvme_io_md": false, 00:14:31.981 "write_zeroes": true, 00:14:31.981 "zcopy": true, 00:14:31.981 "get_zone_info": false, 00:14:31.981 "zone_management": false, 00:14:31.981 "zone_append": false, 00:14:31.981 "compare": false, 00:14:31.981 "compare_and_write": false, 00:14:31.981 "abort": true, 00:14:31.981 "seek_hole": false, 00:14:31.981 "seek_data": false, 00:14:31.981 "copy": true, 00:14:31.981 "nvme_iov_md": false 00:14:31.981 }, 00:14:31.981 "memory_domains": [ 00:14:31.981 { 00:14:31.981 "dma_device_id": "system", 00:14:31.981 "dma_device_type": 1 00:14:31.981 }, 00:14:31.981 { 00:14:31.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.981 "dma_device_type": 2 00:14:31.981 } 00:14:31.981 ], 00:14:31.981 "driver_specific": {} 00:14:31.981 } 00:14:31.981 ] 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.981 
06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.981 "name": "Existed_Raid", 00:14:31.981 "uuid": "f51257c6-694f-4d05-847f-6405ec18e244", 00:14:31.981 "strip_size_kb": 0, 00:14:31.981 "state": "configuring", 00:14:31.981 "raid_level": "raid1", 00:14:31.981 "superblock": true, 00:14:31.981 "num_base_bdevs": 3, 00:14:31.981 "num_base_bdevs_discovered": 2, 00:14:31.981 "num_base_bdevs_operational": 3, 00:14:31.981 "base_bdevs_list": [ 00:14:31.981 { 00:14:31.981 "name": "BaseBdev1", 00:14:31.981 "uuid": "da11e9ea-f139-4f2a-aa0c-c736ded87c2e", 00:14:31.981 "is_configured": true, 00:14:31.981 "data_offset": 2048, 00:14:31.981 "data_size": 63488 00:14:31.981 }, 00:14:31.981 { 00:14:31.981 "name": "BaseBdev2", 00:14:31.981 "uuid": "2aaee8be-4c23-4495-aed6-9047a765a2bc", 00:14:31.981 "is_configured": true, 00:14:31.981 "data_offset": 2048, 00:14:31.981 "data_size": 63488 00:14:31.981 }, 00:14:31.981 { 00:14:31.981 "name": "BaseBdev3", 00:14:31.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.981 "is_configured": false, 00:14:31.981 "data_offset": 0, 00:14:31.981 "data_size": 0 00:14:31.981 } 00:14:31.981 ] 00:14:31.981 }' 00:14:31.981 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.982 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.549 06:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:32.549 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.549 06:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.549 [2024-12-06 06:40:51.017621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:32.549 [2024-12-06 06:40:51.018004] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:14:32.549 [2024-12-06 06:40:51.018036] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:32.549 BaseBdev3 00:14:32.549 [2024-12-06 06:40:51.018408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:32.549 [2024-12-06 06:40:51.018671] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:32.549 [2024-12-06 06:40:51.018689] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:32.549 [2024-12-06 06:40:51.018898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.549 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.549 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:32.549 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:32.549 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:32.549 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:32.549 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:32.549 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:32.549 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:32.549 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.549 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.549 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.549 06:40:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:32.549 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.549 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.549 [ 00:14:32.549 { 00:14:32.549 "name": "BaseBdev3", 00:14:32.549 "aliases": [ 00:14:32.549 "72df1daf-2054-4678-b703-6abad25c8067" 00:14:32.549 ], 00:14:32.549 "product_name": "Malloc disk", 00:14:32.549 "block_size": 512, 00:14:32.549 "num_blocks": 65536, 00:14:32.549 "uuid": "72df1daf-2054-4678-b703-6abad25c8067", 00:14:32.549 "assigned_rate_limits": { 00:14:32.549 "rw_ios_per_sec": 0, 00:14:32.549 "rw_mbytes_per_sec": 0, 00:14:32.549 "r_mbytes_per_sec": 0, 00:14:32.549 "w_mbytes_per_sec": 0 00:14:32.549 }, 00:14:32.549 "claimed": true, 00:14:32.549 "claim_type": "exclusive_write", 00:14:32.549 "zoned": false, 00:14:32.549 "supported_io_types": { 00:14:32.549 "read": true, 00:14:32.549 "write": true, 00:14:32.549 "unmap": true, 00:14:32.549 "flush": true, 00:14:32.549 "reset": true, 00:14:32.549 "nvme_admin": false, 00:14:32.549 "nvme_io": false, 00:14:32.549 "nvme_io_md": false, 00:14:32.549 "write_zeroes": true, 00:14:32.549 "zcopy": true, 00:14:32.549 "get_zone_info": false, 00:14:32.549 "zone_management": false, 00:14:32.549 "zone_append": false, 00:14:32.549 "compare": false, 00:14:32.549 "compare_and_write": false, 00:14:32.549 "abort": true, 00:14:32.549 "seek_hole": false, 00:14:32.549 "seek_data": false, 00:14:32.549 "copy": true, 00:14:32.549 "nvme_iov_md": false 00:14:32.549 }, 00:14:32.549 "memory_domains": [ 00:14:32.549 { 00:14:32.549 "dma_device_id": "system", 00:14:32.549 "dma_device_type": 1 00:14:32.549 }, 00:14:32.549 { 00:14:32.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.549 "dma_device_type": 2 00:14:32.549 } 00:14:32.549 ], 00:14:32.549 "driver_specific": {} 00:14:32.549 } 00:14:32.549 ] 
00:14:32.549 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.550 
06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.550 "name": "Existed_Raid", 00:14:32.550 "uuid": "f51257c6-694f-4d05-847f-6405ec18e244", 00:14:32.550 "strip_size_kb": 0, 00:14:32.550 "state": "online", 00:14:32.550 "raid_level": "raid1", 00:14:32.550 "superblock": true, 00:14:32.550 "num_base_bdevs": 3, 00:14:32.550 "num_base_bdevs_discovered": 3, 00:14:32.550 "num_base_bdevs_operational": 3, 00:14:32.550 "base_bdevs_list": [ 00:14:32.550 { 00:14:32.550 "name": "BaseBdev1", 00:14:32.550 "uuid": "da11e9ea-f139-4f2a-aa0c-c736ded87c2e", 00:14:32.550 "is_configured": true, 00:14:32.550 "data_offset": 2048, 00:14:32.550 "data_size": 63488 00:14:32.550 }, 00:14:32.550 { 00:14:32.550 "name": "BaseBdev2", 00:14:32.550 "uuid": "2aaee8be-4c23-4495-aed6-9047a765a2bc", 00:14:32.550 "is_configured": true, 00:14:32.550 "data_offset": 2048, 00:14:32.550 "data_size": 63488 00:14:32.550 }, 00:14:32.550 { 00:14:32.550 "name": "BaseBdev3", 00:14:32.550 "uuid": "72df1daf-2054-4678-b703-6abad25c8067", 00:14:32.550 "is_configured": true, 00:14:32.550 "data_offset": 2048, 00:14:32.550 "data_size": 63488 00:14:32.550 } 00:14:32.550 ] 00:14:32.550 }' 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.550 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.116 [2024-12-06 06:40:51.586271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:33.116 "name": "Existed_Raid", 00:14:33.116 "aliases": [ 00:14:33.116 "f51257c6-694f-4d05-847f-6405ec18e244" 00:14:33.116 ], 00:14:33.116 "product_name": "Raid Volume", 00:14:33.116 "block_size": 512, 00:14:33.116 "num_blocks": 63488, 00:14:33.116 "uuid": "f51257c6-694f-4d05-847f-6405ec18e244", 00:14:33.116 "assigned_rate_limits": { 00:14:33.116 "rw_ios_per_sec": 0, 00:14:33.116 "rw_mbytes_per_sec": 0, 00:14:33.116 "r_mbytes_per_sec": 0, 00:14:33.116 "w_mbytes_per_sec": 0 00:14:33.116 }, 00:14:33.116 "claimed": false, 00:14:33.116 "zoned": false, 00:14:33.116 "supported_io_types": { 00:14:33.116 "read": true, 00:14:33.116 "write": true, 00:14:33.116 "unmap": false, 00:14:33.116 "flush": false, 00:14:33.116 "reset": true, 00:14:33.116 "nvme_admin": false, 00:14:33.116 "nvme_io": false, 00:14:33.116 "nvme_io_md": false, 00:14:33.116 "write_zeroes": true, 
00:14:33.116 "zcopy": false, 00:14:33.116 "get_zone_info": false, 00:14:33.116 "zone_management": false, 00:14:33.116 "zone_append": false, 00:14:33.116 "compare": false, 00:14:33.116 "compare_and_write": false, 00:14:33.116 "abort": false, 00:14:33.116 "seek_hole": false, 00:14:33.116 "seek_data": false, 00:14:33.116 "copy": false, 00:14:33.116 "nvme_iov_md": false 00:14:33.116 }, 00:14:33.116 "memory_domains": [ 00:14:33.116 { 00:14:33.116 "dma_device_id": "system", 00:14:33.116 "dma_device_type": 1 00:14:33.116 }, 00:14:33.116 { 00:14:33.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.116 "dma_device_type": 2 00:14:33.116 }, 00:14:33.116 { 00:14:33.116 "dma_device_id": "system", 00:14:33.116 "dma_device_type": 1 00:14:33.116 }, 00:14:33.116 { 00:14:33.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.116 "dma_device_type": 2 00:14:33.116 }, 00:14:33.116 { 00:14:33.116 "dma_device_id": "system", 00:14:33.116 "dma_device_type": 1 00:14:33.116 }, 00:14:33.116 { 00:14:33.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.116 "dma_device_type": 2 00:14:33.116 } 00:14:33.116 ], 00:14:33.116 "driver_specific": { 00:14:33.116 "raid": { 00:14:33.116 "uuid": "f51257c6-694f-4d05-847f-6405ec18e244", 00:14:33.116 "strip_size_kb": 0, 00:14:33.116 "state": "online", 00:14:33.116 "raid_level": "raid1", 00:14:33.116 "superblock": true, 00:14:33.116 "num_base_bdevs": 3, 00:14:33.116 "num_base_bdevs_discovered": 3, 00:14:33.116 "num_base_bdevs_operational": 3, 00:14:33.116 "base_bdevs_list": [ 00:14:33.116 { 00:14:33.116 "name": "BaseBdev1", 00:14:33.116 "uuid": "da11e9ea-f139-4f2a-aa0c-c736ded87c2e", 00:14:33.116 "is_configured": true, 00:14:33.116 "data_offset": 2048, 00:14:33.116 "data_size": 63488 00:14:33.116 }, 00:14:33.116 { 00:14:33.116 "name": "BaseBdev2", 00:14:33.116 "uuid": "2aaee8be-4c23-4495-aed6-9047a765a2bc", 00:14:33.116 "is_configured": true, 00:14:33.116 "data_offset": 2048, 00:14:33.116 "data_size": 63488 00:14:33.116 }, 00:14:33.116 { 
00:14:33.116 "name": "BaseBdev3", 00:14:33.116 "uuid": "72df1daf-2054-4678-b703-6abad25c8067", 00:14:33.116 "is_configured": true, 00:14:33.116 "data_offset": 2048, 00:14:33.116 "data_size": 63488 00:14:33.116 } 00:14:33.116 ] 00:14:33.116 } 00:14:33.116 } 00:14:33.116 }' 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:33.116 BaseBdev2 00:14:33.116 BaseBdev3' 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.116 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.375 06:40:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.375 [2024-12-06 06:40:51.861957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.375 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.376 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.376 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.376 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.376 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.376 
06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.376 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.376 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.376 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.376 06:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.376 06:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.376 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.376 "name": "Existed_Raid", 00:14:33.376 "uuid": "f51257c6-694f-4d05-847f-6405ec18e244", 00:14:33.376 "strip_size_kb": 0, 00:14:33.376 "state": "online", 00:14:33.376 "raid_level": "raid1", 00:14:33.376 "superblock": true, 00:14:33.376 "num_base_bdevs": 3, 00:14:33.376 "num_base_bdevs_discovered": 2, 00:14:33.376 "num_base_bdevs_operational": 2, 00:14:33.376 "base_bdevs_list": [ 00:14:33.376 { 00:14:33.376 "name": null, 00:14:33.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.376 "is_configured": false, 00:14:33.376 "data_offset": 0, 00:14:33.376 "data_size": 63488 00:14:33.376 }, 00:14:33.376 { 00:14:33.376 "name": "BaseBdev2", 00:14:33.376 "uuid": "2aaee8be-4c23-4495-aed6-9047a765a2bc", 00:14:33.376 "is_configured": true, 00:14:33.376 "data_offset": 2048, 00:14:33.376 "data_size": 63488 00:14:33.376 }, 00:14:33.376 { 00:14:33.376 "name": "BaseBdev3", 00:14:33.376 "uuid": "72df1daf-2054-4678-b703-6abad25c8067", 00:14:33.376 "is_configured": true, 00:14:33.376 "data_offset": 2048, 00:14:33.376 "data_size": 63488 00:14:33.376 } 00:14:33.376 ] 00:14:33.376 }' 00:14:33.376 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.376 
06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.013 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:34.013 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:34.013 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.013 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:34.013 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.013 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.013 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.013 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:34.013 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:34.013 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:34.014 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.014 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.014 [2024-12-06 06:40:52.526699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:34.014 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.014 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:34.014 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:34.014 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:34.014 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.014 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.014 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:34.014 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.272 [2024-12-06 06:40:52.671681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:34.272 [2024-12-06 06:40:52.671973] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.272 [2024-12-06 06:40:52.758714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.272 [2024-12-06 06:40:52.758985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.272 [2024-12-06 06:40:52.759021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.272 BaseBdev2 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.272 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.273 [ 00:14:34.273 { 00:14:34.273 "name": "BaseBdev2", 00:14:34.273 "aliases": [ 00:14:34.273 "3886557d-a1c6-4204-8ae7-a06d90a0f63a" 00:14:34.273 ], 00:14:34.273 "product_name": "Malloc disk", 00:14:34.273 "block_size": 512, 00:14:34.273 "num_blocks": 65536, 00:14:34.273 "uuid": "3886557d-a1c6-4204-8ae7-a06d90a0f63a", 00:14:34.273 "assigned_rate_limits": { 00:14:34.273 "rw_ios_per_sec": 0, 00:14:34.273 "rw_mbytes_per_sec": 0, 00:14:34.273 "r_mbytes_per_sec": 0, 00:14:34.273 "w_mbytes_per_sec": 0 00:14:34.273 }, 00:14:34.273 "claimed": false, 00:14:34.273 "zoned": false, 00:14:34.273 "supported_io_types": { 00:14:34.273 "read": true, 00:14:34.273 "write": true, 00:14:34.273 "unmap": true, 00:14:34.273 "flush": true, 00:14:34.273 "reset": true, 00:14:34.273 "nvme_admin": false, 00:14:34.273 "nvme_io": false, 00:14:34.273 
"nvme_io_md": false, 00:14:34.273 "write_zeroes": true, 00:14:34.273 "zcopy": true, 00:14:34.273 "get_zone_info": false, 00:14:34.273 "zone_management": false, 00:14:34.273 "zone_append": false, 00:14:34.273 "compare": false, 00:14:34.273 "compare_and_write": false, 00:14:34.273 "abort": true, 00:14:34.273 "seek_hole": false, 00:14:34.273 "seek_data": false, 00:14:34.273 "copy": true, 00:14:34.273 "nvme_iov_md": false 00:14:34.273 }, 00:14:34.273 "memory_domains": [ 00:14:34.273 { 00:14:34.273 "dma_device_id": "system", 00:14:34.273 "dma_device_type": 1 00:14:34.273 }, 00:14:34.273 { 00:14:34.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.273 "dma_device_type": 2 00:14:34.273 } 00:14:34.273 ], 00:14:34.273 "driver_specific": {} 00:14:34.273 } 00:14:34.273 ] 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.273 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.532 BaseBdev3 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.532 [ 00:14:34.532 { 00:14:34.532 "name": "BaseBdev3", 00:14:34.532 "aliases": [ 00:14:34.532 "16759cb5-1a8e-4106-ace6-aadb9d654f57" 00:14:34.532 ], 00:14:34.532 "product_name": "Malloc disk", 00:14:34.532 "block_size": 512, 00:14:34.532 "num_blocks": 65536, 00:14:34.532 "uuid": "16759cb5-1a8e-4106-ace6-aadb9d654f57", 00:14:34.532 "assigned_rate_limits": { 00:14:34.532 "rw_ios_per_sec": 0, 00:14:34.532 "rw_mbytes_per_sec": 0, 00:14:34.532 "r_mbytes_per_sec": 0, 00:14:34.532 "w_mbytes_per_sec": 0 00:14:34.532 }, 00:14:34.532 "claimed": false, 00:14:34.532 "zoned": false, 00:14:34.532 "supported_io_types": { 00:14:34.532 "read": true, 00:14:34.532 "write": true, 00:14:34.532 "unmap": true, 00:14:34.532 "flush": true, 00:14:34.532 "reset": true, 00:14:34.532 "nvme_admin": false, 
00:14:34.532 "nvme_io": false, 00:14:34.532 "nvme_io_md": false, 00:14:34.532 "write_zeroes": true, 00:14:34.532 "zcopy": true, 00:14:34.532 "get_zone_info": false, 00:14:34.532 "zone_management": false, 00:14:34.532 "zone_append": false, 00:14:34.532 "compare": false, 00:14:34.532 "compare_and_write": false, 00:14:34.532 "abort": true, 00:14:34.532 "seek_hole": false, 00:14:34.532 "seek_data": false, 00:14:34.532 "copy": true, 00:14:34.532 "nvme_iov_md": false 00:14:34.532 }, 00:14:34.532 "memory_domains": [ 00:14:34.532 { 00:14:34.532 "dma_device_id": "system", 00:14:34.532 "dma_device_type": 1 00:14:34.532 }, 00:14:34.532 { 00:14:34.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.532 "dma_device_type": 2 00:14:34.532 } 00:14:34.532 ], 00:14:34.532 "driver_specific": {} 00:14:34.532 } 00:14:34.532 ] 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.532 [2024-12-06 06:40:52.972110] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.532 [2024-12-06 06:40:52.972172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.532 [2024-12-06 06:40:52.972200] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.532 [2024-12-06 06:40:52.974670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.532 06:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.532 
06:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.532 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.532 "name": "Existed_Raid", 00:14:34.532 "uuid": "9cba8c4d-78b4-477b-b7ed-455edc07b737", 00:14:34.532 "strip_size_kb": 0, 00:14:34.532 "state": "configuring", 00:14:34.532 "raid_level": "raid1", 00:14:34.532 "superblock": true, 00:14:34.532 "num_base_bdevs": 3, 00:14:34.532 "num_base_bdevs_discovered": 2, 00:14:34.532 "num_base_bdevs_operational": 3, 00:14:34.533 "base_bdevs_list": [ 00:14:34.533 { 00:14:34.533 "name": "BaseBdev1", 00:14:34.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.533 "is_configured": false, 00:14:34.533 "data_offset": 0, 00:14:34.533 "data_size": 0 00:14:34.533 }, 00:14:34.533 { 00:14:34.533 "name": "BaseBdev2", 00:14:34.533 "uuid": "3886557d-a1c6-4204-8ae7-a06d90a0f63a", 00:14:34.533 "is_configured": true, 00:14:34.533 "data_offset": 2048, 00:14:34.533 "data_size": 63488 00:14:34.533 }, 00:14:34.533 { 00:14:34.533 "name": "BaseBdev3", 00:14:34.533 "uuid": "16759cb5-1a8e-4106-ace6-aadb9d654f57", 00:14:34.533 "is_configured": true, 00:14:34.533 "data_offset": 2048, 00:14:34.533 "data_size": 63488 00:14:34.533 } 00:14:34.533 ] 00:14:34.533 }' 00:14:34.533 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.533 06:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.099 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:35.099 06:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.099 06:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.099 [2024-12-06 06:40:53.500297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:35.099 06:40:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.099 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:35.099 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.099 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.099 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.099 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.099 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.099 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.099 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.099 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.099 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.099 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.099 06:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.099 06:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.099 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.100 06:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.100 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.100 "name": 
"Existed_Raid", 00:14:35.100 "uuid": "9cba8c4d-78b4-477b-b7ed-455edc07b737", 00:14:35.100 "strip_size_kb": 0, 00:14:35.100 "state": "configuring", 00:14:35.100 "raid_level": "raid1", 00:14:35.100 "superblock": true, 00:14:35.100 "num_base_bdevs": 3, 00:14:35.100 "num_base_bdevs_discovered": 1, 00:14:35.100 "num_base_bdevs_operational": 3, 00:14:35.100 "base_bdevs_list": [ 00:14:35.100 { 00:14:35.100 "name": "BaseBdev1", 00:14:35.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.100 "is_configured": false, 00:14:35.100 "data_offset": 0, 00:14:35.100 "data_size": 0 00:14:35.100 }, 00:14:35.100 { 00:14:35.100 "name": null, 00:14:35.100 "uuid": "3886557d-a1c6-4204-8ae7-a06d90a0f63a", 00:14:35.100 "is_configured": false, 00:14:35.100 "data_offset": 0, 00:14:35.100 "data_size": 63488 00:14:35.100 }, 00:14:35.100 { 00:14:35.100 "name": "BaseBdev3", 00:14:35.100 "uuid": "16759cb5-1a8e-4106-ace6-aadb9d654f57", 00:14:35.100 "is_configured": true, 00:14:35.100 "data_offset": 2048, 00:14:35.100 "data_size": 63488 00:14:35.100 } 00:14:35.100 ] 00:14:35.100 }' 00:14:35.100 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.100 06:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.358 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.358 06:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.358 06:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.358 06:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:35.617 
06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.617 [2024-12-06 06:40:54.077158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.617 BaseBdev1 00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:35.617 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.617 [ 00:14:35.617 { 00:14:35.617 "name": "BaseBdev1", 00:14:35.617 "aliases": [ 00:14:35.617 "0d3b40f2-8151-427e-ba73-57849bdf0f23" 00:14:35.617 ], 00:14:35.617 "product_name": "Malloc disk", 00:14:35.617 "block_size": 512, 00:14:35.617 "num_blocks": 65536, 00:14:35.617 "uuid": "0d3b40f2-8151-427e-ba73-57849bdf0f23", 00:14:35.617 "assigned_rate_limits": { 00:14:35.617 "rw_ios_per_sec": 0, 00:14:35.617 "rw_mbytes_per_sec": 0, 00:14:35.617 "r_mbytes_per_sec": 0, 00:14:35.617 "w_mbytes_per_sec": 0 00:14:35.617 }, 00:14:35.617 "claimed": true, 00:14:35.617 "claim_type": "exclusive_write", 00:14:35.617 "zoned": false, 00:14:35.617 "supported_io_types": { 00:14:35.617 "read": true, 00:14:35.617 "write": true, 00:14:35.617 "unmap": true, 00:14:35.617 "flush": true, 00:14:35.617 "reset": true, 00:14:35.617 "nvme_admin": false, 00:14:35.617 "nvme_io": false, 00:14:35.617 "nvme_io_md": false, 00:14:35.617 "write_zeroes": true, 00:14:35.617 "zcopy": true, 00:14:35.617 "get_zone_info": false, 00:14:35.617 "zone_management": false, 00:14:35.617 "zone_append": false, 00:14:35.617 "compare": false, 00:14:35.618 "compare_and_write": false, 00:14:35.618 "abort": true, 00:14:35.618 "seek_hole": false, 00:14:35.618 "seek_data": false, 00:14:35.618 "copy": true, 00:14:35.618 "nvme_iov_md": false 00:14:35.618 }, 00:14:35.618 "memory_domains": [ 00:14:35.618 { 00:14:35.618 "dma_device_id": "system", 00:14:35.618 "dma_device_type": 1 00:14:35.618 }, 00:14:35.618 { 00:14:35.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.618 "dma_device_type": 2 00:14:35.618 } 00:14:35.618 ], 00:14:35.618 "driver_specific": {} 00:14:35.618 } 00:14:35.618 ] 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:35.618 
06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.618 "name": "Existed_Raid", 00:14:35.618 "uuid": "9cba8c4d-78b4-477b-b7ed-455edc07b737", 00:14:35.618 "strip_size_kb": 0, 
00:14:35.618 "state": "configuring", 00:14:35.618 "raid_level": "raid1", 00:14:35.618 "superblock": true, 00:14:35.618 "num_base_bdevs": 3, 00:14:35.618 "num_base_bdevs_discovered": 2, 00:14:35.618 "num_base_bdevs_operational": 3, 00:14:35.618 "base_bdevs_list": [ 00:14:35.618 { 00:14:35.618 "name": "BaseBdev1", 00:14:35.618 "uuid": "0d3b40f2-8151-427e-ba73-57849bdf0f23", 00:14:35.618 "is_configured": true, 00:14:35.618 "data_offset": 2048, 00:14:35.618 "data_size": 63488 00:14:35.618 }, 00:14:35.618 { 00:14:35.618 "name": null, 00:14:35.618 "uuid": "3886557d-a1c6-4204-8ae7-a06d90a0f63a", 00:14:35.618 "is_configured": false, 00:14:35.618 "data_offset": 0, 00:14:35.618 "data_size": 63488 00:14:35.618 }, 00:14:35.618 { 00:14:35.618 "name": "BaseBdev3", 00:14:35.618 "uuid": "16759cb5-1a8e-4106-ace6-aadb9d654f57", 00:14:35.618 "is_configured": true, 00:14:35.618 "data_offset": 2048, 00:14:35.618 "data_size": 63488 00:14:35.618 } 00:14:35.618 ] 00:14:35.618 }' 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.618 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.185 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:36.185 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.185 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.185 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.185 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.185 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:36.185 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:14:36.185 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.185 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.185 [2024-12-06 06:40:54.645467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:36.185 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.185 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:36.185 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.185 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.185 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.185 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.185 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.186 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.186 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.186 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.186 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.186 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.186 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.186 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.186 06:40:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.186 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.186 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.186 "name": "Existed_Raid", 00:14:36.186 "uuid": "9cba8c4d-78b4-477b-b7ed-455edc07b737", 00:14:36.186 "strip_size_kb": 0, 00:14:36.186 "state": "configuring", 00:14:36.186 "raid_level": "raid1", 00:14:36.186 "superblock": true, 00:14:36.186 "num_base_bdevs": 3, 00:14:36.186 "num_base_bdevs_discovered": 1, 00:14:36.186 "num_base_bdevs_operational": 3, 00:14:36.186 "base_bdevs_list": [ 00:14:36.186 { 00:14:36.186 "name": "BaseBdev1", 00:14:36.186 "uuid": "0d3b40f2-8151-427e-ba73-57849bdf0f23", 00:14:36.186 "is_configured": true, 00:14:36.186 "data_offset": 2048, 00:14:36.186 "data_size": 63488 00:14:36.186 }, 00:14:36.186 { 00:14:36.186 "name": null, 00:14:36.186 "uuid": "3886557d-a1c6-4204-8ae7-a06d90a0f63a", 00:14:36.186 "is_configured": false, 00:14:36.186 "data_offset": 0, 00:14:36.186 "data_size": 63488 00:14:36.186 }, 00:14:36.186 { 00:14:36.186 "name": null, 00:14:36.186 "uuid": "16759cb5-1a8e-4106-ace6-aadb9d654f57", 00:14:36.186 "is_configured": false, 00:14:36.186 "data_offset": 0, 00:14:36.186 "data_size": 63488 00:14:36.186 } 00:14:36.186 ] 00:14:36.186 }' 00:14:36.186 06:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.186 06:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.751 [2024-12-06 06:40:55.177701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.751 "name": "Existed_Raid", 00:14:36.751 "uuid": "9cba8c4d-78b4-477b-b7ed-455edc07b737", 00:14:36.751 "strip_size_kb": 0, 00:14:36.751 "state": "configuring", 00:14:36.751 "raid_level": "raid1", 00:14:36.751 "superblock": true, 00:14:36.751 "num_base_bdevs": 3, 00:14:36.751 "num_base_bdevs_discovered": 2, 00:14:36.751 "num_base_bdevs_operational": 3, 00:14:36.751 "base_bdevs_list": [ 00:14:36.751 { 00:14:36.751 "name": "BaseBdev1", 00:14:36.751 "uuid": "0d3b40f2-8151-427e-ba73-57849bdf0f23", 00:14:36.751 "is_configured": true, 00:14:36.751 "data_offset": 2048, 00:14:36.751 "data_size": 63488 00:14:36.751 }, 00:14:36.751 { 00:14:36.751 "name": null, 00:14:36.751 "uuid": "3886557d-a1c6-4204-8ae7-a06d90a0f63a", 00:14:36.751 "is_configured": false, 00:14:36.751 "data_offset": 0, 00:14:36.751 "data_size": 63488 00:14:36.751 }, 00:14:36.751 { 00:14:36.751 "name": "BaseBdev3", 00:14:36.751 "uuid": "16759cb5-1a8e-4106-ace6-aadb9d654f57", 00:14:36.751 "is_configured": true, 00:14:36.751 "data_offset": 2048, 00:14:36.751 "data_size": 63488 00:14:36.751 } 00:14:36.751 ] 00:14:36.751 }' 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.751 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.317 [2024-12-06 06:40:55.729815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.317 "name": "Existed_Raid", 00:14:37.317 "uuid": "9cba8c4d-78b4-477b-b7ed-455edc07b737", 00:14:37.317 "strip_size_kb": 0, 00:14:37.317 "state": "configuring", 00:14:37.317 "raid_level": "raid1", 00:14:37.317 "superblock": true, 00:14:37.317 "num_base_bdevs": 3, 00:14:37.317 "num_base_bdevs_discovered": 1, 00:14:37.317 "num_base_bdevs_operational": 3, 00:14:37.317 "base_bdevs_list": [ 00:14:37.317 { 00:14:37.317 "name": null, 00:14:37.317 "uuid": "0d3b40f2-8151-427e-ba73-57849bdf0f23", 00:14:37.317 "is_configured": false, 00:14:37.317 "data_offset": 0, 00:14:37.317 "data_size": 63488 00:14:37.317 }, 00:14:37.317 { 00:14:37.317 "name": null, 00:14:37.317 "uuid": 
"3886557d-a1c6-4204-8ae7-a06d90a0f63a", 00:14:37.317 "is_configured": false, 00:14:37.317 "data_offset": 0, 00:14:37.317 "data_size": 63488 00:14:37.317 }, 00:14:37.317 { 00:14:37.317 "name": "BaseBdev3", 00:14:37.317 "uuid": "16759cb5-1a8e-4106-ace6-aadb9d654f57", 00:14:37.317 "is_configured": true, 00:14:37.317 "data_offset": 2048, 00:14:37.317 "data_size": 63488 00:14:37.317 } 00:14:37.317 ] 00:14:37.317 }' 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.317 06:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.883 [2024-12-06 06:40:56.357063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.883 "name": "Existed_Raid", 00:14:37.883 "uuid": "9cba8c4d-78b4-477b-b7ed-455edc07b737", 00:14:37.883 "strip_size_kb": 0, 00:14:37.883 "state": "configuring", 00:14:37.883 
"raid_level": "raid1", 00:14:37.883 "superblock": true, 00:14:37.883 "num_base_bdevs": 3, 00:14:37.883 "num_base_bdevs_discovered": 2, 00:14:37.883 "num_base_bdevs_operational": 3, 00:14:37.883 "base_bdevs_list": [ 00:14:37.883 { 00:14:37.883 "name": null, 00:14:37.883 "uuid": "0d3b40f2-8151-427e-ba73-57849bdf0f23", 00:14:37.883 "is_configured": false, 00:14:37.883 "data_offset": 0, 00:14:37.883 "data_size": 63488 00:14:37.883 }, 00:14:37.883 { 00:14:37.883 "name": "BaseBdev2", 00:14:37.883 "uuid": "3886557d-a1c6-4204-8ae7-a06d90a0f63a", 00:14:37.883 "is_configured": true, 00:14:37.883 "data_offset": 2048, 00:14:37.883 "data_size": 63488 00:14:37.883 }, 00:14:37.883 { 00:14:37.883 "name": "BaseBdev3", 00:14:37.883 "uuid": "16759cb5-1a8e-4106-ace6-aadb9d654f57", 00:14:37.883 "is_configured": true, 00:14:37.883 "data_offset": 2048, 00:14:37.883 "data_size": 63488 00:14:37.883 } 00:14:37.883 ] 00:14:37.883 }' 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.883 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.544 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.544 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:38.544 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.544 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.544 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.544 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:38.544 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.544 06:40:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:38.544 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.544 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.544 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.544 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0d3b40f2-8151-427e-ba73-57849bdf0f23 00:14:38.544 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.544 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.544 [2024-12-06 06:40:56.999651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:38.544 [2024-12-06 06:40:57.000160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:38.544 [2024-12-06 06:40:57.000185] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:38.544 NewBaseBdev 00:14:38.544 [2024-12-06 06:40:57.000497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:38.544 [2024-12-06 06:40:57.000706] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:38.544 [2024-12-06 06:40:57.000728] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:38.544 [2024-12-06 06:40:57.000891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.545 06:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.545 06:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:38.545 
06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.545 [ 00:14:38.545 { 00:14:38.545 "name": "NewBaseBdev", 00:14:38.545 "aliases": [ 00:14:38.545 "0d3b40f2-8151-427e-ba73-57849bdf0f23" 00:14:38.545 ], 00:14:38.545 "product_name": "Malloc disk", 00:14:38.545 "block_size": 512, 00:14:38.545 "num_blocks": 65536, 00:14:38.545 "uuid": "0d3b40f2-8151-427e-ba73-57849bdf0f23", 00:14:38.545 "assigned_rate_limits": { 00:14:38.545 "rw_ios_per_sec": 0, 00:14:38.545 "rw_mbytes_per_sec": 0, 00:14:38.545 "r_mbytes_per_sec": 0, 00:14:38.545 "w_mbytes_per_sec": 0 00:14:38.545 }, 00:14:38.545 "claimed": true, 00:14:38.545 "claim_type": "exclusive_write", 00:14:38.545 
"zoned": false, 00:14:38.545 "supported_io_types": { 00:14:38.545 "read": true, 00:14:38.545 "write": true, 00:14:38.545 "unmap": true, 00:14:38.545 "flush": true, 00:14:38.545 "reset": true, 00:14:38.545 "nvme_admin": false, 00:14:38.545 "nvme_io": false, 00:14:38.545 "nvme_io_md": false, 00:14:38.545 "write_zeroes": true, 00:14:38.545 "zcopy": true, 00:14:38.545 "get_zone_info": false, 00:14:38.545 "zone_management": false, 00:14:38.545 "zone_append": false, 00:14:38.545 "compare": false, 00:14:38.545 "compare_and_write": false, 00:14:38.545 "abort": true, 00:14:38.545 "seek_hole": false, 00:14:38.545 "seek_data": false, 00:14:38.545 "copy": true, 00:14:38.545 "nvme_iov_md": false 00:14:38.545 }, 00:14:38.545 "memory_domains": [ 00:14:38.545 { 00:14:38.545 "dma_device_id": "system", 00:14:38.545 "dma_device_type": 1 00:14:38.545 }, 00:14:38.545 { 00:14:38.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.545 "dma_device_type": 2 00:14:38.545 } 00:14:38.545 ], 00:14:38.545 "driver_specific": {} 00:14:38.545 } 00:14:38.545 ] 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.545 "name": "Existed_Raid", 00:14:38.545 "uuid": "9cba8c4d-78b4-477b-b7ed-455edc07b737", 00:14:38.545 "strip_size_kb": 0, 00:14:38.545 "state": "online", 00:14:38.545 "raid_level": "raid1", 00:14:38.545 "superblock": true, 00:14:38.545 "num_base_bdevs": 3, 00:14:38.545 "num_base_bdevs_discovered": 3, 00:14:38.545 "num_base_bdevs_operational": 3, 00:14:38.545 "base_bdevs_list": [ 00:14:38.545 { 00:14:38.545 "name": "NewBaseBdev", 00:14:38.545 "uuid": "0d3b40f2-8151-427e-ba73-57849bdf0f23", 00:14:38.545 "is_configured": true, 00:14:38.545 "data_offset": 2048, 00:14:38.545 "data_size": 63488 00:14:38.545 }, 00:14:38.545 { 00:14:38.545 "name": "BaseBdev2", 00:14:38.545 "uuid": "3886557d-a1c6-4204-8ae7-a06d90a0f63a", 00:14:38.545 "is_configured": true, 00:14:38.545 "data_offset": 2048, 00:14:38.545 "data_size": 63488 00:14:38.545 }, 00:14:38.545 
{ 00:14:38.545 "name": "BaseBdev3", 00:14:38.545 "uuid": "16759cb5-1a8e-4106-ace6-aadb9d654f57", 00:14:38.545 "is_configured": true, 00:14:38.545 "data_offset": 2048, 00:14:38.545 "data_size": 63488 00:14:38.545 } 00:14:38.545 ] 00:14:38.545 }' 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.545 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.111 [2024-12-06 06:40:57.524260] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:39.111 "name": "Existed_Raid", 00:14:39.111 
"aliases": [ 00:14:39.111 "9cba8c4d-78b4-477b-b7ed-455edc07b737" 00:14:39.111 ], 00:14:39.111 "product_name": "Raid Volume", 00:14:39.111 "block_size": 512, 00:14:39.111 "num_blocks": 63488, 00:14:39.111 "uuid": "9cba8c4d-78b4-477b-b7ed-455edc07b737", 00:14:39.111 "assigned_rate_limits": { 00:14:39.111 "rw_ios_per_sec": 0, 00:14:39.111 "rw_mbytes_per_sec": 0, 00:14:39.111 "r_mbytes_per_sec": 0, 00:14:39.111 "w_mbytes_per_sec": 0 00:14:39.111 }, 00:14:39.111 "claimed": false, 00:14:39.111 "zoned": false, 00:14:39.111 "supported_io_types": { 00:14:39.111 "read": true, 00:14:39.111 "write": true, 00:14:39.111 "unmap": false, 00:14:39.111 "flush": false, 00:14:39.111 "reset": true, 00:14:39.111 "nvme_admin": false, 00:14:39.111 "nvme_io": false, 00:14:39.111 "nvme_io_md": false, 00:14:39.111 "write_zeroes": true, 00:14:39.111 "zcopy": false, 00:14:39.111 "get_zone_info": false, 00:14:39.111 "zone_management": false, 00:14:39.111 "zone_append": false, 00:14:39.111 "compare": false, 00:14:39.111 "compare_and_write": false, 00:14:39.111 "abort": false, 00:14:39.111 "seek_hole": false, 00:14:39.111 "seek_data": false, 00:14:39.111 "copy": false, 00:14:39.111 "nvme_iov_md": false 00:14:39.111 }, 00:14:39.111 "memory_domains": [ 00:14:39.111 { 00:14:39.111 "dma_device_id": "system", 00:14:39.111 "dma_device_type": 1 00:14:39.111 }, 00:14:39.111 { 00:14:39.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.111 "dma_device_type": 2 00:14:39.111 }, 00:14:39.111 { 00:14:39.111 "dma_device_id": "system", 00:14:39.111 "dma_device_type": 1 00:14:39.111 }, 00:14:39.111 { 00:14:39.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.111 "dma_device_type": 2 00:14:39.111 }, 00:14:39.111 { 00:14:39.111 "dma_device_id": "system", 00:14:39.111 "dma_device_type": 1 00:14:39.111 }, 00:14:39.111 { 00:14:39.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.111 "dma_device_type": 2 00:14:39.111 } 00:14:39.111 ], 00:14:39.111 "driver_specific": { 00:14:39.111 "raid": { 00:14:39.111 
"uuid": "9cba8c4d-78b4-477b-b7ed-455edc07b737", 00:14:39.111 "strip_size_kb": 0, 00:14:39.111 "state": "online", 00:14:39.111 "raid_level": "raid1", 00:14:39.111 "superblock": true, 00:14:39.111 "num_base_bdevs": 3, 00:14:39.111 "num_base_bdevs_discovered": 3, 00:14:39.111 "num_base_bdevs_operational": 3, 00:14:39.111 "base_bdevs_list": [ 00:14:39.111 { 00:14:39.111 "name": "NewBaseBdev", 00:14:39.111 "uuid": "0d3b40f2-8151-427e-ba73-57849bdf0f23", 00:14:39.111 "is_configured": true, 00:14:39.111 "data_offset": 2048, 00:14:39.111 "data_size": 63488 00:14:39.111 }, 00:14:39.111 { 00:14:39.111 "name": "BaseBdev2", 00:14:39.111 "uuid": "3886557d-a1c6-4204-8ae7-a06d90a0f63a", 00:14:39.111 "is_configured": true, 00:14:39.111 "data_offset": 2048, 00:14:39.111 "data_size": 63488 00:14:39.111 }, 00:14:39.111 { 00:14:39.111 "name": "BaseBdev3", 00:14:39.111 "uuid": "16759cb5-1a8e-4106-ace6-aadb9d654f57", 00:14:39.111 "is_configured": true, 00:14:39.111 "data_offset": 2048, 00:14:39.111 "data_size": 63488 00:14:39.111 } 00:14:39.111 ] 00:14:39.111 } 00:14:39.111 } 00:14:39.111 }' 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:39.111 BaseBdev2 00:14:39.111 BaseBdev3' 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:39.111 06:40:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.111 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.369 [2024-12-06 06:40:57.855952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:39.369 [2024-12-06 06:40:57.856007] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.369 [2024-12-06 06:40:57.856107] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.369 [2024-12-06 06:40:57.856482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.369 [2024-12-06 06:40:57.856501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68278 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68278 ']' 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68278 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68278 00:14:39.369 killing process with pid 68278 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68278' 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68278 00:14:39.369 [2024-12-06 06:40:57.892686] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.369 06:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68278 00:14:39.627 [2024-12-06 06:40:58.161352] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:40.999 06:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:40.999 00:14:40.999 real 0m11.615s 00:14:40.999 user 0m19.168s 00:14:40.999 sys 0m1.638s 00:14:40.999 06:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:40.999 06:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.999 ************************************ 00:14:40.999 END TEST raid_state_function_test_sb 00:14:40.999 ************************************ 00:14:40.999 06:40:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:14:40.999 06:40:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:40.999 06:40:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:40.999 06:40:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:40.999 ************************************ 00:14:40.999 START TEST raid_superblock_test 00:14:40.999 ************************************ 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:40.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68909 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68909 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68909 ']' 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:40.999 06:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.999 [2024-12-06 06:40:59.360704] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:14:40.999 [2024-12-06 06:40:59.360915] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68909 ] 00:14:40.999 [2024-12-06 06:40:59.541842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.256 [2024-12-06 06:40:59.671700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.256 [2024-12-06 06:40:59.876356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.256 [2024-12-06 06:40:59.876433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:41.819 
06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.819 malloc1 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.819 [2024-12-06 06:41:00.453191] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:41.819 [2024-12-06 06:41:00.453279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.819 [2024-12-06 06:41:00.453314] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:41.819 [2024-12-06 06:41:00.453331] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.819 [2024-12-06 06:41:00.456120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.819 [2024-12-06 06:41:00.456166] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:41.819 pt1 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.819 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.078 malloc2 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.078 [2024-12-06 06:41:00.508847] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:42.078 [2024-12-06 06:41:00.508917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.078 [2024-12-06 06:41:00.508956] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:42.078 [2024-12-06 06:41:00.508972] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.078 [2024-12-06 06:41:00.511762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.078 [2024-12-06 06:41:00.511810] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:42.078 
pt2 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.078 malloc3 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.078 [2024-12-06 06:41:00.576705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:42.078 [2024-12-06 06:41:00.576784] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.078 [2024-12-06 06:41:00.576820] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:42.078 [2024-12-06 06:41:00.576836] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.078 [2024-12-06 06:41:00.579685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.078 [2024-12-06 06:41:00.579732] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:42.078 pt3 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.078 [2024-12-06 06:41:00.588769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:42.078 [2024-12-06 06:41:00.591254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:42.078 [2024-12-06 06:41:00.591507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:42.078 [2024-12-06 06:41:00.591781] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:42.078 [2024-12-06 06:41:00.591809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:42.078 [2024-12-06 06:41:00.592150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:42.078 
[2024-12-06 06:41:00.592394] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:42.078 [2024-12-06 06:41:00.592415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:42.078 [2024-12-06 06:41:00.592711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.078 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.079 06:41:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:42.079 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.079 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.079 "name": "raid_bdev1", 00:14:42.079 "uuid": "fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8", 00:14:42.079 "strip_size_kb": 0, 00:14:42.079 "state": "online", 00:14:42.079 "raid_level": "raid1", 00:14:42.079 "superblock": true, 00:14:42.079 "num_base_bdevs": 3, 00:14:42.079 "num_base_bdevs_discovered": 3, 00:14:42.079 "num_base_bdevs_operational": 3, 00:14:42.079 "base_bdevs_list": [ 00:14:42.079 { 00:14:42.079 "name": "pt1", 00:14:42.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.079 "is_configured": true, 00:14:42.079 "data_offset": 2048, 00:14:42.079 "data_size": 63488 00:14:42.079 }, 00:14:42.079 { 00:14:42.079 "name": "pt2", 00:14:42.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.079 "is_configured": true, 00:14:42.079 "data_offset": 2048, 00:14:42.079 "data_size": 63488 00:14:42.079 }, 00:14:42.079 { 00:14:42.079 "name": "pt3", 00:14:42.079 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.079 "is_configured": true, 00:14:42.079 "data_offset": 2048, 00:14:42.079 "data_size": 63488 00:14:42.079 } 00:14:42.079 ] 00:14:42.079 }' 00:14:42.079 06:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.079 06:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.643 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:42.643 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:42.643 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:42.643 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:42.643 06:41:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:42.643 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:42.643 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:42.643 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.643 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.643 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:42.643 [2024-12-06 06:41:01.121311] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.643 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.643 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:42.643 "name": "raid_bdev1", 00:14:42.643 "aliases": [ 00:14:42.643 "fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8" 00:14:42.643 ], 00:14:42.643 "product_name": "Raid Volume", 00:14:42.643 "block_size": 512, 00:14:42.643 "num_blocks": 63488, 00:14:42.643 "uuid": "fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8", 00:14:42.643 "assigned_rate_limits": { 00:14:42.643 "rw_ios_per_sec": 0, 00:14:42.643 "rw_mbytes_per_sec": 0, 00:14:42.643 "r_mbytes_per_sec": 0, 00:14:42.643 "w_mbytes_per_sec": 0 00:14:42.643 }, 00:14:42.643 "claimed": false, 00:14:42.643 "zoned": false, 00:14:42.643 "supported_io_types": { 00:14:42.643 "read": true, 00:14:42.643 "write": true, 00:14:42.643 "unmap": false, 00:14:42.643 "flush": false, 00:14:42.643 "reset": true, 00:14:42.643 "nvme_admin": false, 00:14:42.643 "nvme_io": false, 00:14:42.643 "nvme_io_md": false, 00:14:42.643 "write_zeroes": true, 00:14:42.643 "zcopy": false, 00:14:42.643 "get_zone_info": false, 00:14:42.643 "zone_management": false, 00:14:42.643 "zone_append": false, 00:14:42.643 "compare": false, 00:14:42.643 
"compare_and_write": false, 00:14:42.643 "abort": false, 00:14:42.643 "seek_hole": false, 00:14:42.643 "seek_data": false, 00:14:42.643 "copy": false, 00:14:42.643 "nvme_iov_md": false 00:14:42.643 }, 00:14:42.643 "memory_domains": [ 00:14:42.643 { 00:14:42.643 "dma_device_id": "system", 00:14:42.643 "dma_device_type": 1 00:14:42.643 }, 00:14:42.643 { 00:14:42.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.643 "dma_device_type": 2 00:14:42.643 }, 00:14:42.643 { 00:14:42.643 "dma_device_id": "system", 00:14:42.643 "dma_device_type": 1 00:14:42.643 }, 00:14:42.643 { 00:14:42.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.643 "dma_device_type": 2 00:14:42.643 }, 00:14:42.643 { 00:14:42.643 "dma_device_id": "system", 00:14:42.643 "dma_device_type": 1 00:14:42.643 }, 00:14:42.643 { 00:14:42.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.644 "dma_device_type": 2 00:14:42.644 } 00:14:42.644 ], 00:14:42.644 "driver_specific": { 00:14:42.644 "raid": { 00:14:42.644 "uuid": "fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8", 00:14:42.644 "strip_size_kb": 0, 00:14:42.644 "state": "online", 00:14:42.644 "raid_level": "raid1", 00:14:42.644 "superblock": true, 00:14:42.644 "num_base_bdevs": 3, 00:14:42.644 "num_base_bdevs_discovered": 3, 00:14:42.644 "num_base_bdevs_operational": 3, 00:14:42.644 "base_bdevs_list": [ 00:14:42.644 { 00:14:42.644 "name": "pt1", 00:14:42.644 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.644 "is_configured": true, 00:14:42.644 "data_offset": 2048, 00:14:42.644 "data_size": 63488 00:14:42.644 }, 00:14:42.644 { 00:14:42.644 "name": "pt2", 00:14:42.644 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.644 "is_configured": true, 00:14:42.644 "data_offset": 2048, 00:14:42.644 "data_size": 63488 00:14:42.644 }, 00:14:42.644 { 00:14:42.644 "name": "pt3", 00:14:42.644 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.644 "is_configured": true, 00:14:42.644 "data_offset": 2048, 00:14:42.644 "data_size": 63488 00:14:42.644 } 
00:14:42.644 ] 00:14:42.644 } 00:14:42.644 } 00:14:42.644 }' 00:14:42.644 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:42.644 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:42.644 pt2 00:14:42.644 pt3' 00:14:42.644 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.644 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:42.644 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.644 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:42.644 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.644 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.644 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.644 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.904 [2024-12-06 06:41:01.409384] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8 ']' 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.904 [2024-12-06 06:41:01.456998] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:42.904 [2024-12-06 06:41:01.457154] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.904 [2024-12-06 06:41:01.457292] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.904 [2024-12-06 06:41:01.457397] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.904 [2024-12-06 06:41:01.457415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.904 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:14:42.905 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:42.905 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:42.905 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:42.905 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.905 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.905 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.905 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:42.905 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:42.905 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.905 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.905 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.905 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:42.905 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:42.905 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.905 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.905 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:43.172 06:41:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.172 [2024-12-06 06:41:01.613129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:43.172 [2024-12-06 06:41:01.615608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:43.172 [2024-12-06 06:41:01.615827] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:43.172 [2024-12-06 06:41:01.615913] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:43.172 [2024-12-06 06:41:01.616002] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:43.172 [2024-12-06 06:41:01.616037] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:43.172 [2024-12-06 06:41:01.616064] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:43.172 [2024-12-06 06:41:01.616079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:43.172 request: 00:14:43.172 { 00:14:43.172 "name": "raid_bdev1", 00:14:43.172 "raid_level": "raid1", 00:14:43.172 "base_bdevs": [ 00:14:43.172 "malloc1", 00:14:43.172 "malloc2", 00:14:43.172 "malloc3" 00:14:43.172 ], 00:14:43.172 "superblock": false, 00:14:43.172 "method": "bdev_raid_create", 00:14:43.172 "req_id": 1 00:14:43.172 } 00:14:43.172 Got JSON-RPC error response 00:14:43.172 response: 00:14:43.172 { 00:14:43.172 "code": -17, 00:14:43.172 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:43.172 } 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:43.172 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.173 [2024-12-06 06:41:01.681058] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:43.173 [2024-12-06 06:41:01.681272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.173 [2024-12-06 06:41:01.681317] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:43.173 [2024-12-06 06:41:01.681334] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.173 [2024-12-06 06:41:01.684221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.173 [2024-12-06 06:41:01.684268] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:43.173 [2024-12-06 06:41:01.684384] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:43.173 [2024-12-06 06:41:01.684452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:43.173 pt1 00:14:43.173 
06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.173 "name": "raid_bdev1", 00:14:43.173 "uuid": "fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8", 00:14:43.173 "strip_size_kb": 0, 00:14:43.173 
"state": "configuring", 00:14:43.173 "raid_level": "raid1", 00:14:43.173 "superblock": true, 00:14:43.173 "num_base_bdevs": 3, 00:14:43.173 "num_base_bdevs_discovered": 1, 00:14:43.173 "num_base_bdevs_operational": 3, 00:14:43.173 "base_bdevs_list": [ 00:14:43.173 { 00:14:43.173 "name": "pt1", 00:14:43.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:43.173 "is_configured": true, 00:14:43.173 "data_offset": 2048, 00:14:43.173 "data_size": 63488 00:14:43.173 }, 00:14:43.173 { 00:14:43.173 "name": null, 00:14:43.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.173 "is_configured": false, 00:14:43.173 "data_offset": 2048, 00:14:43.173 "data_size": 63488 00:14:43.173 }, 00:14:43.173 { 00:14:43.173 "name": null, 00:14:43.173 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.173 "is_configured": false, 00:14:43.173 "data_offset": 2048, 00:14:43.173 "data_size": 63488 00:14:43.173 } 00:14:43.173 ] 00:14:43.173 }' 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.173 06:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.738 [2024-12-06 06:41:02.193976] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:43.738 [2024-12-06 06:41:02.194055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.738 [2024-12-06 06:41:02.194090] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:43.738 
[2024-12-06 06:41:02.194104] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.738 [2024-12-06 06:41:02.194672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.738 [2024-12-06 06:41:02.194697] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:43.738 [2024-12-06 06:41:02.194810] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:43.738 [2024-12-06 06:41:02.194843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:43.738 pt2 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.738 [2024-12-06 06:41:02.205956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.738 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.739 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.739 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.739 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.739 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.739 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.739 "name": "raid_bdev1", 00:14:43.739 "uuid": "fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8", 00:14:43.739 "strip_size_kb": 0, 00:14:43.739 "state": "configuring", 00:14:43.739 "raid_level": "raid1", 00:14:43.739 "superblock": true, 00:14:43.739 "num_base_bdevs": 3, 00:14:43.739 "num_base_bdevs_discovered": 1, 00:14:43.739 "num_base_bdevs_operational": 3, 00:14:43.739 "base_bdevs_list": [ 00:14:43.739 { 00:14:43.739 "name": "pt1", 00:14:43.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:43.739 "is_configured": true, 00:14:43.739 "data_offset": 2048, 00:14:43.739 "data_size": 63488 00:14:43.739 }, 00:14:43.739 { 00:14:43.739 "name": null, 00:14:43.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.739 "is_configured": false, 00:14:43.739 "data_offset": 0, 00:14:43.739 "data_size": 63488 00:14:43.739 }, 00:14:43.739 { 00:14:43.739 "name": null, 00:14:43.739 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.739 "is_configured": false, 00:14:43.739 
"data_offset": 2048, 00:14:43.739 "data_size": 63488 00:14:43.739 } 00:14:43.739 ] 00:14:43.739 }' 00:14:43.739 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.739 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.304 [2024-12-06 06:41:02.718107] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:44.304 [2024-12-06 06:41:02.718338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.304 [2024-12-06 06:41:02.718377] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:44.304 [2024-12-06 06:41:02.718396] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.304 [2024-12-06 06:41:02.718991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.304 [2024-12-06 06:41:02.719022] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:44.304 [2024-12-06 06:41:02.719127] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:44.304 [2024-12-06 06:41:02.719176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:44.304 pt2 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.304 06:41:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.304 [2024-12-06 06:41:02.726069] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:44.304 [2024-12-06 06:41:02.726127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.304 [2024-12-06 06:41:02.726150] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:44.304 [2024-12-06 06:41:02.726165] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.304 [2024-12-06 06:41:02.726632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.304 [2024-12-06 06:41:02.726680] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:44.304 [2024-12-06 06:41:02.726757] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:44.304 [2024-12-06 06:41:02.726789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:44.304 [2024-12-06 06:41:02.726949] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:44.304 [2024-12-06 06:41:02.726973] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:44.304 [2024-12-06 06:41:02.727282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:44.304 [2024-12-06 06:41:02.727480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:14:44.304 [2024-12-06 06:41:02.727495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:44.304 [2024-12-06 06:41:02.727686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.304 pt3 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.304 06:41:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.304 "name": "raid_bdev1", 00:14:44.304 "uuid": "fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8", 00:14:44.304 "strip_size_kb": 0, 00:14:44.304 "state": "online", 00:14:44.304 "raid_level": "raid1", 00:14:44.304 "superblock": true, 00:14:44.304 "num_base_bdevs": 3, 00:14:44.304 "num_base_bdevs_discovered": 3, 00:14:44.304 "num_base_bdevs_operational": 3, 00:14:44.304 "base_bdevs_list": [ 00:14:44.304 { 00:14:44.304 "name": "pt1", 00:14:44.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:44.304 "is_configured": true, 00:14:44.304 "data_offset": 2048, 00:14:44.304 "data_size": 63488 00:14:44.304 }, 00:14:44.304 { 00:14:44.304 "name": "pt2", 00:14:44.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:44.304 "is_configured": true, 00:14:44.304 "data_offset": 2048, 00:14:44.304 "data_size": 63488 00:14:44.304 }, 00:14:44.304 { 00:14:44.304 "name": "pt3", 00:14:44.304 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:44.304 "is_configured": true, 00:14:44.304 "data_offset": 2048, 00:14:44.304 "data_size": 63488 00:14:44.304 } 00:14:44.304 ] 00:14:44.304 }' 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.304 06:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.870 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:44.870 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:44.870 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info
00:14:44.870 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:44.870 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:14:44.870 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:44.870 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:44.870 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:44.870 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:44.870 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:44.870 [2024-12-06 06:41:03.246649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:44.870 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:44.870 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:44.870 "name": "raid_bdev1",
00:14:44.870 "aliases": [
00:14:44.870 "fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8"
00:14:44.870 ],
00:14:44.870 "product_name": "Raid Volume",
00:14:44.870 "block_size": 512,
00:14:44.870 "num_blocks": 63488,
00:14:44.870 "uuid": "fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8",
00:14:44.870 "assigned_rate_limits": {
00:14:44.870 "rw_ios_per_sec": 0,
00:14:44.870 "rw_mbytes_per_sec": 0,
00:14:44.870 "r_mbytes_per_sec": 0,
00:14:44.870 "w_mbytes_per_sec": 0
00:14:44.870 },
00:14:44.870 "claimed": false,
00:14:44.870 "zoned": false,
00:14:44.870 "supported_io_types": {
00:14:44.870 "read": true,
00:14:44.870 "write": true,
00:14:44.870 "unmap": false,
00:14:44.870 "flush": false,
00:14:44.870 "reset": true,
00:14:44.870 "nvme_admin": false,
00:14:44.870 "nvme_io": false,
00:14:44.870 "nvme_io_md": false,
00:14:44.870 "write_zeroes": true,
00:14:44.870 "zcopy": false,
00:14:44.870 "get_zone_info": false,
00:14:44.870 "zone_management": false,
00:14:44.870 "zone_append": false,
00:14:44.870 "compare": false,
00:14:44.870 "compare_and_write": false,
00:14:44.870 "abort": false,
00:14:44.870 "seek_hole": false,
00:14:44.870 "seek_data": false,
00:14:44.870 "copy": false,
00:14:44.870 "nvme_iov_md": false
00:14:44.870 },
00:14:44.870 "memory_domains": [
00:14:44.870 {
00:14:44.870 "dma_device_id": "system",
00:14:44.870 "dma_device_type": 1
00:14:44.870 },
00:14:44.870 {
00:14:44.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:44.870 "dma_device_type": 2
00:14:44.870 },
00:14:44.870 {
00:14:44.870 "dma_device_id": "system",
00:14:44.870 "dma_device_type": 1
00:14:44.870 },
00:14:44.870 {
00:14:44.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:44.870 "dma_device_type": 2
00:14:44.870 },
00:14:44.870 {
00:14:44.870 "dma_device_id": "system",
00:14:44.870 "dma_device_type": 1
00:14:44.870 },
00:14:44.870 {
00:14:44.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:44.870 "dma_device_type": 2
00:14:44.870 }
00:14:44.870 ],
00:14:44.870 "driver_specific": {
00:14:44.870 "raid": {
00:14:44.870 "uuid": "fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8",
00:14:44.870 "strip_size_kb": 0,
00:14:44.871 "state": "online",
00:14:44.871 "raid_level": "raid1",
00:14:44.871 "superblock": true,
00:14:44.871 "num_base_bdevs": 3,
00:14:44.871 "num_base_bdevs_discovered": 3,
00:14:44.871 "num_base_bdevs_operational": 3,
00:14:44.871 "base_bdevs_list": [
00:14:44.871 {
00:14:44.871 "name": "pt1",
00:14:44.871 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:44.871 "is_configured": true,
00:14:44.871 "data_offset": 2048,
00:14:44.871 "data_size": 63488
00:14:44.871 },
00:14:44.871 {
00:14:44.871 "name": "pt2",
00:14:44.871 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:44.871 "is_configured": true,
00:14:44.871 "data_offset": 2048,
00:14:44.871 "data_size": 63488
00:14:44.871 },
00:14:44.871 {
00:14:44.871 "name": "pt3",
00:14:44.871 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:44.871 "is_configured": true,
00:14:44.871 "data_offset": 2048,
00:14:44.871 "data_size": 63488
00:14:44.871 }
00:14:44.871 ]
00:14:44.871 }
00:14:44.871 }
00:14:44.871 }'
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:14:44.871 pt2
00:14:44.871 pt3'
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:44.871 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.129 [2024-12-06 06:41:03.574648] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8 '!=' fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8 ']'
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.129 [2024-12-06 06:41:03.610402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.129 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:45.129 "name": "raid_bdev1",
00:14:45.129 "uuid": "fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8",
00:14:45.129 "strip_size_kb": 0,
00:14:45.129 "state": "online",
00:14:45.129 "raid_level": "raid1",
00:14:45.129 "superblock": true,
00:14:45.129 "num_base_bdevs": 3,
00:14:45.129 "num_base_bdevs_discovered": 2,
00:14:45.129 "num_base_bdevs_operational": 2,
00:14:45.129 "base_bdevs_list": [
00:14:45.129 {
00:14:45.129 "name": null,
00:14:45.129 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:45.129 "is_configured": false,
00:14:45.129 "data_offset": 0,
00:14:45.130 "data_size": 63488
00:14:45.130 },
00:14:45.130 {
00:14:45.130 "name": "pt2",
00:14:45.130 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:45.130 "is_configured": true,
00:14:45.130 "data_offset": 2048,
00:14:45.130 "data_size": 63488
00:14:45.130 },
00:14:45.130 {
00:14:45.130 "name": "pt3",
00:14:45.130 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:45.130 "is_configured": true,
00:14:45.130 "data_offset": 2048,
00:14:45.130 "data_size": 63488
00:14:45.130 }
00:14:45.130 ]
00:14:45.130 }'
00:14:45.130 06:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:45.130 06:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.696 [2024-12-06 06:41:04.134470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:45.696 [2024-12-06 06:41:04.134509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:45.696 [2024-12-06 06:41:04.134623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:45.696 [2024-12-06 06:41:04.134706] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:45.696 [2024-12-06 06:41:04.134730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.696 [2024-12-06 06:41:04.214440] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:45.696 [2024-12-06 06:41:04.214539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:45.696 [2024-12-06 06:41:04.214566] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:14:45.696 [2024-12-06 06:41:04.214584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:45.696 [2024-12-06 06:41:04.217466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:45.696 [2024-12-06 06:41:04.217519] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:45.696 [2024-12-06 06:41:04.217638] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:14:45.696 [2024-12-06 06:41:04.217705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:45.696 pt2
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.696 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:14:45.697 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:45.697 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:45.697 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:45.697 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:45.697 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:45.697 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:45.697 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:45.697 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:45.697 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:45.697 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:45.697 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:45.697 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.697 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.697 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.697 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:45.697 "name": "raid_bdev1",
00:14:45.697 "uuid": "fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8",
00:14:45.697 "strip_size_kb": 0,
00:14:45.697 "state": "configuring",
00:14:45.697 "raid_level": "raid1",
00:14:45.697 "superblock": true,
00:14:45.697 "num_base_bdevs": 3,
00:14:45.697 "num_base_bdevs_discovered": 1,
00:14:45.697 "num_base_bdevs_operational": 2,
00:14:45.697 "base_bdevs_list": [
00:14:45.697 {
00:14:45.697 "name": null,
00:14:45.697 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:45.697 "is_configured": false,
00:14:45.697 "data_offset": 2048,
00:14:45.697 "data_size": 63488
00:14:45.697 },
00:14:45.697 {
00:14:45.697 "name": "pt2",
00:14:45.697 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:45.697 "is_configured": true,
00:14:45.697 "data_offset": 2048,
00:14:45.697 "data_size": 63488
00:14:45.697 },
00:14:45.697 {
00:14:45.697 "name": null,
00:14:45.697 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:45.697 "is_configured": false,
00:14:45.697 "data_offset": 2048,
00:14:45.697 "data_size": 63488
00:14:45.697 }
00:14:45.697 ]
00:14:45.697 }'
00:14:45.697 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:45.697 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:46.264 [2024-12-06 06:41:04.710634] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:14:46.264 [2024-12-06 06:41:04.710718] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:46.264 [2024-12-06 06:41:04.710749] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:14:46.264 [2024-12-06 06:41:04.710768] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:46.264 [2024-12-06 06:41:04.711338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:46.264 [2024-12-06 06:41:04.711386] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:14:46.264 [2024-12-06 06:41:04.711499] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:14:46.264 [2024-12-06 06:41:04.711557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:14:46.264 [2024-12-06 06:41:04.711716] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:14:46.264 [2024-12-06 06:41:04.711748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:46.264 [2024-12-06 06:41:04.712070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:14:46.264 [2024-12-06 06:41:04.712287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:14:46.264 [2024-12-06 06:41:04.712312] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:14:46.264 [2024-12-06 06:41:04.712487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:46.264 pt3
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:46.264 "name": "raid_bdev1",
00:14:46.264 "uuid": "fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8",
00:14:46.264 "strip_size_kb": 0,
00:14:46.264 "state": "online",
00:14:46.264 "raid_level": "raid1",
00:14:46.264 "superblock": true,
00:14:46.264 "num_base_bdevs": 3,
00:14:46.264 "num_base_bdevs_discovered": 2,
00:14:46.264 "num_base_bdevs_operational": 2,
00:14:46.264 "base_bdevs_list": [
00:14:46.264 {
00:14:46.264 "name": null,
00:14:46.264 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:46.264 "is_configured": false,
00:14:46.264 "data_offset": 2048,
00:14:46.264 "data_size": 63488
00:14:46.264 },
00:14:46.264 {
00:14:46.264 "name": "pt2",
00:14:46.264 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:46.264 "is_configured": true,
00:14:46.264 "data_offset": 2048,
00:14:46.264 "data_size": 63488
00:14:46.264 },
00:14:46.264 {
00:14:46.264 "name": "pt3",
00:14:46.264 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:46.264 "is_configured": true,
00:14:46.264 "data_offset": 2048,
00:14:46.264 "data_size": 63488
00:14:46.264 }
00:14:46.264 ]
00:14:46.264 }'
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:46.264 06:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:46.836 [2024-12-06 06:41:05.226835] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:46.836 [2024-12-06 06:41:05.226876] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:46.836 [2024-12-06 06:41:05.226977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:46.836 [2024-12-06 06:41:05.227064] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:46.836 [2024-12-06 06:41:05.227081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']'
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:46.836 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:46.836 [2024-12-06 06:41:05.294860] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:46.836 [2024-12-06 06:41:05.294929] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:46.836 [2024-12-06 06:41:05.294958] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:14:46.836 [2024-12-06 06:41:05.294973] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:46.836 [2024-12-06 06:41:05.297871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:46.836 [2024-12-06 06:41:05.297917] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:46.836 [2024-12-06 06:41:05.298021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:14:46.836 [2024-12-06 06:41:05.298087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:46.836 [2024-12-06 06:41:05.298256] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:14:46.836 [2024-12-06 06:41:05.298274] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:46.836 [2024-12-06 06:41:05.298297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:14:46.837 [2024-12-06 06:41:05.298367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:46.837 pt1
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']'
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:46.837 "name": "raid_bdev1",
00:14:46.837 "uuid": "fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8",
00:14:46.837 "strip_size_kb": 0,
00:14:46.837 "state": "configuring",
00:14:46.837 "raid_level": "raid1",
00:14:46.837 "superblock": true,
00:14:46.837 "num_base_bdevs": 3,
00:14:46.837 "num_base_bdevs_discovered": 1,
00:14:46.837 "num_base_bdevs_operational": 2,
00:14:46.837 "base_bdevs_list": [
00:14:46.837 {
00:14:46.837 "name": null,
00:14:46.837 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:46.837 "is_configured": false,
00:14:46.837 "data_offset": 2048,
00:14:46.837 "data_size": 63488
00:14:46.837 },
00:14:46.837 {
00:14:46.837 "name": "pt2",
00:14:46.837 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:46.837 "is_configured": true,
00:14:46.837 "data_offset": 2048,
00:14:46.837 "data_size": 63488
00:14:46.837 },
00:14:46.837 {
00:14:46.837 "name": null,
00:14:46.837 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:46.837 "is_configured": false,
00:14:46.837 "data_offset": 2048,
00:14:46.837 "data_size": 63488
00:14:46.837 }
00:14:46.837 ]
00:14:46.837 }'
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:46.837 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]]
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.403 [2024-12-06 06:41:05.867025] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:14:47.403 [2024-12-06 06:41:05.867111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:47.403 [2024-12-06 06:41:05.867146] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:14:47.403 [2024-12-06 06:41:05.867161] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:47.403 [2024-12-06 06:41:05.867780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:47.403 [2024-12-06 06:41:05.867822] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:14:47.403 [2024-12-06 06:41:05.867936] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:14:47.403 [2024-12-06 06:41:05.867968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:14:47.403 [2024-12-06 06:41:05.868122] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:14:47.403 [2024-12-06 06:41:05.868137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:47.403 [2024-12-06 06:41:05.868445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:14:47.403 [2024-12-06 06:41:05.868675] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:14:47.403 [2024-12-06 06:41:05.868701] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
00:14:47.403 [2024-12-06 06:41:05.868872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:47.403 pt3
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0
]] 00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.403 "name": "raid_bdev1", 00:14:47.403 "uuid": "fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8", 00:14:47.403 "strip_size_kb": 0, 00:14:47.403 "state": "online", 00:14:47.403 "raid_level": "raid1", 00:14:47.403 "superblock": true, 00:14:47.403 "num_base_bdevs": 3, 00:14:47.403 "num_base_bdevs_discovered": 2, 00:14:47.403 "num_base_bdevs_operational": 2, 00:14:47.403 "base_bdevs_list": [ 00:14:47.403 { 00:14:47.403 "name": null, 00:14:47.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.403 "is_configured": false, 00:14:47.403 "data_offset": 2048, 00:14:47.403 "data_size": 63488 00:14:47.403 }, 00:14:47.403 { 00:14:47.403 "name": "pt2", 00:14:47.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:47.403 "is_configured": true, 00:14:47.403 "data_offset": 2048, 00:14:47.403 "data_size": 63488 00:14:47.403 }, 00:14:47.403 { 00:14:47.403 "name": "pt3", 00:14:47.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:47.403 "is_configured": true, 00:14:47.403 "data_offset": 2048, 00:14:47.403 "data_size": 63488 00:14:47.403 } 00:14:47.403 ] 00:14:47.403 }' 00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.403 06:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.055 [2024-12-06 06:41:06.451549] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8 '!=' fe5e2bb9-feb6-4b51-96ce-f1402fd20fe8 ']' 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68909 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68909 ']' 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68909 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68909 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68909' 00:14:48.055 killing process with pid 68909 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68909 00:14:48.055 [2024-12-06 06:41:06.525604] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.055 06:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68909 00:14:48.055 [2024-12-06 06:41:06.525722] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.055 [2024-12-06 06:41:06.525818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.055 [2024-12-06 06:41:06.525838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:48.342 [2024-12-06 06:41:06.798576] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:49.276 06:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:49.276 00:14:49.276 real 0m8.577s 00:14:49.276 user 0m14.094s 00:14:49.276 sys 0m1.169s 00:14:49.276 ************************************ 00:14:49.276 END TEST raid_superblock_test 00:14:49.276 ************************************ 00:14:49.276 06:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:49.276 06:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.276 06:41:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:14:49.276 06:41:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:49.276 06:41:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.276 06:41:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:49.276 ************************************ 00:14:49.276 START TEST raid_read_error_test 00:14:49.276 ************************************ 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:14:49.276 06:41:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:49.276 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:49.277 06:41:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:49.277 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:49.277 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:49.277 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:49.277 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:49.277 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YF8c71T5hj 00:14:49.277 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69360 00:14:49.277 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:49.277 06:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69360 00:14:49.277 06:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69360 ']' 00:14:49.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.277 06:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.277 06:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.277 06:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.277 06:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.277 06:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.536 [2024-12-06 06:41:07.987394] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:14:49.536 [2024-12-06 06:41:07.988896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69360 ] 00:14:49.536 [2024-12-06 06:41:08.167801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.794 [2024-12-06 06:41:08.321393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.054 [2024-12-06 06:41:08.532040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.054 [2024-12-06 06:41:08.532126] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.313 06:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.313 06:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:50.313 06:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:50.313 06:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:50.313 06:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.313 06:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.313 BaseBdev1_malloc 00:14:50.314 06:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.314 06:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:50.314 06:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.314 06:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.572 true 00:14:50.573 06:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:50.573 06:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:50.573 06:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.573 06:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.573 [2024-12-06 06:41:08.966770] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:50.573 [2024-12-06 06:41:08.966839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.573 [2024-12-06 06:41:08.966870] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:50.573 [2024-12-06 06:41:08.966890] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.573 [2024-12-06 06:41:08.969718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.573 [2024-12-06 06:41:08.969906] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:50.573 BaseBdev1 00:14:50.573 06:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.573 06:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:50.573 06:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:50.573 06:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.573 06:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.573 BaseBdev2_malloc 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.573 true 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.573 [2024-12-06 06:41:09.023208] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:50.573 [2024-12-06 06:41:09.023278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.573 [2024-12-06 06:41:09.023303] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:50.573 [2024-12-06 06:41:09.023321] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.573 [2024-12-06 06:41:09.026095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.573 [2024-12-06 06:41:09.026147] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:50.573 BaseBdev2 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.573 BaseBdev3_malloc 00:14:50.573 06:41:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.573 true 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.573 [2024-12-06 06:41:09.102418] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:50.573 [2024-12-06 06:41:09.102755] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.573 [2024-12-06 06:41:09.102802] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:50.573 [2024-12-06 06:41:09.102823] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.573 [2024-12-06 06:41:09.105917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.573 [2024-12-06 06:41:09.106084] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:50.573 BaseBdev3 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.573 [2024-12-06 06:41:09.114538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.573 [2024-12-06 06:41:09.117089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:50.573 [2024-12-06 06:41:09.117346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.573 [2024-12-06 06:41:09.117702] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:50.573 [2024-12-06 06:41:09.117723] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:50.573 [2024-12-06 06:41:09.118081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:50.573 [2024-12-06 06:41:09.118327] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:50.573 [2024-12-06 06:41:09.118348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:50.573 [2024-12-06 06:41:09.118626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.573 06:41:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.573 "name": "raid_bdev1", 00:14:50.573 "uuid": "48207845-3069-4c94-8100-45d8ac8aadc4", 00:14:50.573 "strip_size_kb": 0, 00:14:50.573 "state": "online", 00:14:50.573 "raid_level": "raid1", 00:14:50.573 "superblock": true, 00:14:50.573 "num_base_bdevs": 3, 00:14:50.573 "num_base_bdevs_discovered": 3, 00:14:50.573 "num_base_bdevs_operational": 3, 00:14:50.573 "base_bdevs_list": [ 00:14:50.573 { 00:14:50.573 "name": "BaseBdev1", 00:14:50.573 "uuid": "9d62dd3b-89bc-5d02-842a-2d94f994f083", 00:14:50.573 "is_configured": true, 00:14:50.573 "data_offset": 2048, 00:14:50.573 "data_size": 63488 00:14:50.573 }, 00:14:50.573 { 00:14:50.573 "name": "BaseBdev2", 00:14:50.573 "uuid": "8fc09c5c-6b58-56bd-b7f3-19a1e78b4eab", 00:14:50.573 "is_configured": true, 00:14:50.573 "data_offset": 2048, 00:14:50.573 "data_size": 63488 
00:14:50.573 }, 00:14:50.573 { 00:14:50.573 "name": "BaseBdev3", 00:14:50.573 "uuid": "cb333d8c-9252-5df7-a525-f64956ca31ea", 00:14:50.573 "is_configured": true, 00:14:50.573 "data_offset": 2048, 00:14:50.573 "data_size": 63488 00:14:50.573 } 00:14:50.573 ] 00:14:50.573 }' 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.573 06:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.141 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:51.141 06:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:51.141 [2024-12-06 06:41:09.756253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.078 
06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.078 "name": "raid_bdev1", 00:14:52.078 "uuid": "48207845-3069-4c94-8100-45d8ac8aadc4", 00:14:52.078 "strip_size_kb": 0, 00:14:52.078 "state": "online", 00:14:52.078 "raid_level": "raid1", 00:14:52.078 "superblock": true, 00:14:52.078 "num_base_bdevs": 3, 00:14:52.078 "num_base_bdevs_discovered": 3, 00:14:52.078 "num_base_bdevs_operational": 3, 00:14:52.078 "base_bdevs_list": [ 00:14:52.078 { 00:14:52.078 "name": "BaseBdev1", 00:14:52.078 "uuid": "9d62dd3b-89bc-5d02-842a-2d94f994f083", 
00:14:52.078 "is_configured": true, 00:14:52.078 "data_offset": 2048, 00:14:52.078 "data_size": 63488 00:14:52.078 }, 00:14:52.078 { 00:14:52.078 "name": "BaseBdev2", 00:14:52.078 "uuid": "8fc09c5c-6b58-56bd-b7f3-19a1e78b4eab", 00:14:52.078 "is_configured": true, 00:14:52.078 "data_offset": 2048, 00:14:52.078 "data_size": 63488 00:14:52.078 }, 00:14:52.078 { 00:14:52.078 "name": "BaseBdev3", 00:14:52.078 "uuid": "cb333d8c-9252-5df7-a525-f64956ca31ea", 00:14:52.078 "is_configured": true, 00:14:52.078 "data_offset": 2048, 00:14:52.078 "data_size": 63488 00:14:52.078 } 00:14:52.078 ] 00:14:52.078 }' 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.078 06:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.646 06:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:52.646 06:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.646 06:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.646 [2024-12-06 06:41:11.138572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:52.646 [2024-12-06 06:41:11.138739] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.646 [2024-12-06 06:41:11.142270] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.646 { 00:14:52.646 "results": [ 00:14:52.646 { 00:14:52.646 "job": "raid_bdev1", 00:14:52.646 "core_mask": "0x1", 00:14:52.646 "workload": "randrw", 00:14:52.646 "percentage": 50, 00:14:52.646 "status": "finished", 00:14:52.646 "queue_depth": 1, 00:14:52.646 "io_size": 131072, 00:14:52.646 "runtime": 1.37996, 00:14:52.646 "iops": 8979.245775239862, 00:14:52.646 "mibps": 1122.4057219049828, 00:14:52.646 "io_failed": 0, 00:14:52.646 "io_timeout": 0, 00:14:52.646 "avg_latency_us": 106.81914175244496, 
00:14:52.646 "min_latency_us": 42.82181818181818, 00:14:52.646 "max_latency_us": 1936.290909090909 00:14:52.646 } 00:14:52.646 ], 00:14:52.646 "core_count": 1 00:14:52.646 } 00:14:52.646 [2024-12-06 06:41:11.142460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.646 [2024-12-06 06:41:11.142693] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.646 [2024-12-06 06:41:11.142714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:52.646 06:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.647 06:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69360 00:14:52.647 06:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69360 ']' 00:14:52.647 06:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69360 00:14:52.647 06:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:52.647 06:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.647 06:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69360 00:14:52.647 killing process with pid 69360 00:14:52.647 06:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:52.647 06:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:52.647 06:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69360' 00:14:52.647 06:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69360 00:14:52.647 [2024-12-06 06:41:11.186640] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.647 06:41:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69360 00:14:52.905 [2024-12-06 06:41:11.395074] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.279 06:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YF8c71T5hj 00:14:54.279 06:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:54.279 06:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:54.279 06:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:54.279 06:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:54.279 06:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:54.279 06:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:54.279 06:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:54.279 00:14:54.279 real 0m4.652s 00:14:54.279 user 0m5.702s 00:14:54.279 sys 0m0.564s 00:14:54.279 06:41:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.279 ************************************ 00:14:54.279 END TEST raid_read_error_test 00:14:54.279 ************************************ 00:14:54.279 06:41:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.279 06:41:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:14:54.279 06:41:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:54.279 06:41:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.279 06:41:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:54.279 ************************************ 00:14:54.279 START TEST raid_write_error_test 00:14:54.279 ************************************ 00:14:54.279 06:41:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.94oI0UC82o 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69506 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69506 00:14:54.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69506 ']' 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.279 06:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.279 [2024-12-06 06:41:12.697230] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:14:54.280 [2024-12-06 06:41:12.697407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69506 ] 00:14:54.280 [2024-12-06 06:41:12.880026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.537 [2024-12-06 06:41:13.061195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.796 [2024-12-06 06:41:13.271907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.796 [2024-12-06 06:41:13.271952] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.364 BaseBdev1_malloc 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.364 true 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.364 [2024-12-06 06:41:13.808130] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:55.364 [2024-12-06 06:41:13.808218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.364 [2024-12-06 06:41:13.808264] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:55.364 [2024-12-06 06:41:13.808299] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.364 [2024-12-06 06:41:13.811180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.364 [2024-12-06 06:41:13.811232] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:55.364 BaseBdev1 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:55.364 BaseBdev2_malloc 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.364 true 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.364 [2024-12-06 06:41:13.873714] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:55.364 [2024-12-06 06:41:13.874053] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.364 [2024-12-06 06:41:13.874099] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:55.364 [2024-12-06 06:41:13.874120] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.364 [2024-12-06 06:41:13.877246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.364 [2024-12-06 06:41:13.877449] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:55.364 BaseBdev2 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:55.364 06:41:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.364 BaseBdev3_malloc 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.364 true 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.364 [2024-12-06 06:41:13.947160] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:55.364 [2024-12-06 06:41:13.947230] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.364 [2024-12-06 06:41:13.947259] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:55.364 [2024-12-06 06:41:13.947278] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.364 [2024-12-06 06:41:13.950158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.364 [2024-12-06 06:41:13.950351] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:14:55.364 BaseBdev3 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.364 [2024-12-06 06:41:13.955326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.364 [2024-12-06 06:41:13.957819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.364 [2024-12-06 06:41:13.957931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.364 [2024-12-06 06:41:13.958230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:55.364 [2024-12-06 06:41:13.958251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:55.364 [2024-12-06 06:41:13.958630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:55.364 [2024-12-06 06:41:13.958866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:55.364 [2024-12-06 06:41:13.958892] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:55.364 [2024-12-06 06:41:13.959096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.364 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.365 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.365 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.365 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.365 06:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.365 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.365 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.365 06:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.623 06:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.623 "name": "raid_bdev1", 00:14:55.623 "uuid": "d43aa579-9cad-4094-8466-50894a387932", 00:14:55.623 "strip_size_kb": 0, 00:14:55.623 "state": "online", 00:14:55.623 "raid_level": "raid1", 00:14:55.623 "superblock": true, 00:14:55.623 "num_base_bdevs": 3, 00:14:55.623 "num_base_bdevs_discovered": 3, 00:14:55.623 "num_base_bdevs_operational": 3, 00:14:55.623 "base_bdevs_list": [ 00:14:55.623 { 00:14:55.623 "name": "BaseBdev1", 00:14:55.623 
"uuid": "3963afae-5fc4-5e3a-9569-cd1f8cf36549", 00:14:55.623 "is_configured": true, 00:14:55.623 "data_offset": 2048, 00:14:55.623 "data_size": 63488 00:14:55.623 }, 00:14:55.623 { 00:14:55.623 "name": "BaseBdev2", 00:14:55.623 "uuid": "15ac4f63-fdae-5372-9e3f-b031cef037e9", 00:14:55.623 "is_configured": true, 00:14:55.623 "data_offset": 2048, 00:14:55.623 "data_size": 63488 00:14:55.623 }, 00:14:55.623 { 00:14:55.623 "name": "BaseBdev3", 00:14:55.623 "uuid": "cec19f14-ef77-542e-9e34-09a739961df8", 00:14:55.623 "is_configured": true, 00:14:55.623 "data_offset": 2048, 00:14:55.623 "data_size": 63488 00:14:55.623 } 00:14:55.623 ] 00:14:55.623 }' 00:14:55.623 06:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.623 06:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.881 06:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:55.881 06:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:56.139 [2024-12-06 06:41:14.584891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.119 [2024-12-06 06:41:15.469849] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:57.119 [2024-12-06 06:41:15.470049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.119 [2024-12-06 06:41:15.470336] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.119 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.119 "name": "raid_bdev1", 00:14:57.119 "uuid": "d43aa579-9cad-4094-8466-50894a387932", 00:14:57.120 "strip_size_kb": 0, 00:14:57.120 "state": "online", 00:14:57.120 "raid_level": "raid1", 00:14:57.120 "superblock": true, 00:14:57.120 "num_base_bdevs": 3, 00:14:57.120 "num_base_bdevs_discovered": 2, 00:14:57.120 "num_base_bdevs_operational": 2, 00:14:57.120 "base_bdevs_list": [ 00:14:57.120 { 00:14:57.120 "name": null, 00:14:57.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.120 "is_configured": false, 00:14:57.120 "data_offset": 0, 00:14:57.120 "data_size": 63488 00:14:57.120 }, 00:14:57.120 { 00:14:57.120 "name": "BaseBdev2", 00:14:57.120 "uuid": "15ac4f63-fdae-5372-9e3f-b031cef037e9", 00:14:57.120 "is_configured": true, 00:14:57.120 "data_offset": 2048, 00:14:57.120 "data_size": 63488 00:14:57.120 }, 00:14:57.120 { 00:14:57.120 "name": "BaseBdev3", 00:14:57.120 "uuid": "cec19f14-ef77-542e-9e34-09a739961df8", 00:14:57.120 "is_configured": true, 00:14:57.120 "data_offset": 2048, 00:14:57.120 "data_size": 63488 00:14:57.120 } 00:14:57.120 ] 00:14:57.120 }' 00:14:57.120 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.120 06:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.377 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:57.377 06:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.377 06:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.377 [2024-12-06 06:41:15.970580] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.377 [2024-12-06 06:41:15.970620] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.377 [2024-12-06 06:41:15.974002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.377 { 00:14:57.377 "results": [ 00:14:57.377 { 00:14:57.377 "job": "raid_bdev1", 00:14:57.378 "core_mask": "0x1", 00:14:57.378 "workload": "randrw", 00:14:57.378 "percentage": 50, 00:14:57.378 "status": "finished", 00:14:57.378 "queue_depth": 1, 00:14:57.378 "io_size": 131072, 00:14:57.378 "runtime": 1.383305, 00:14:57.378 "iops": 10055.627645385508, 00:14:57.378 "mibps": 1256.9534556731885, 00:14:57.378 "io_failed": 0, 00:14:57.378 "io_timeout": 0, 00:14:57.378 "avg_latency_us": 95.01274112803085, 00:14:57.378 "min_latency_us": 43.054545454545455, 00:14:57.378 "max_latency_us": 1846.9236363636364 00:14:57.378 } 00:14:57.378 ], 00:14:57.378 "core_count": 1 00:14:57.378 } 00:14:57.378 [2024-12-06 06:41:15.975135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.378 [2024-12-06 06:41:15.975318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.378 [2024-12-06 06:41:15.975348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:57.378 06:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.378 06:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69506 00:14:57.378 06:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69506 ']' 00:14:57.378 06:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69506 00:14:57.378 06:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:14:57.378 06:41:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.378 06:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69506 00:14:57.378 killing process with pid 69506 00:14:57.378 06:41:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.378 06:41:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.378 06:41:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69506' 00:14:57.378 06:41:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69506 00:14:57.378 [2024-12-06 06:41:16.014054] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.378 06:41:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69506 00:14:57.636 [2024-12-06 06:41:16.221515] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.012 06:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.94oI0UC82o 00:14:59.012 06:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:59.012 06:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:59.012 06:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:59.012 06:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:59.012 06:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:59.012 06:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:59.012 06:41:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:59.012 00:14:59.012 real 0m4.736s 00:14:59.012 user 0m5.860s 00:14:59.012 sys 0m0.602s 00:14:59.012 06:41:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.012 ************************************ 00:14:59.012 END TEST raid_write_error_test 00:14:59.012 ************************************ 00:14:59.012 06:41:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.012 06:41:17 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:14:59.012 06:41:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:59.012 06:41:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:14:59.012 06:41:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:59.012 06:41:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.012 06:41:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:59.012 ************************************ 00:14:59.012 START TEST raid_state_function_test 00:14:59.012 ************************************ 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.012 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:59.013 
06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:59.013 Process raid pid: 69650 00:14:59.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69650 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69650' 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69650 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69650 ']' 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.013 06:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.013 [2024-12-06 06:41:17.491129] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:14:59.013 [2024-12-06 06:41:17.491310] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.269 [2024-12-06 06:41:17.676317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.270 [2024-12-06 06:41:17.808850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.527 [2024-12-06 06:41:18.018414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.527 [2024-12-06 06:41:18.018474] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.092 [2024-12-06 06:41:18.510998] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.092 [2024-12-06 06:41:18.511069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.092 [2024-12-06 06:41:18.511086] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.092 [2024-12-06 06:41:18.511102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.092 [2024-12-06 06:41:18.511112] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:00.092 [2024-12-06 06:41:18.511127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:00.092 [2024-12-06 06:41:18.511136] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:00.092 [2024-12-06 06:41:18.511150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.092 "name": "Existed_Raid", 00:15:00.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.092 "strip_size_kb": 64, 00:15:00.092 "state": "configuring", 00:15:00.092 "raid_level": "raid0", 00:15:00.092 "superblock": false, 00:15:00.092 "num_base_bdevs": 4, 00:15:00.092 "num_base_bdevs_discovered": 0, 00:15:00.092 "num_base_bdevs_operational": 4, 00:15:00.092 "base_bdevs_list": [ 00:15:00.092 { 00:15:00.092 "name": "BaseBdev1", 00:15:00.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.092 "is_configured": false, 00:15:00.092 "data_offset": 0, 00:15:00.092 "data_size": 0 00:15:00.092 }, 00:15:00.092 { 00:15:00.092 "name": "BaseBdev2", 00:15:00.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.092 "is_configured": false, 00:15:00.092 "data_offset": 0, 00:15:00.092 "data_size": 0 00:15:00.092 }, 00:15:00.092 { 00:15:00.092 "name": "BaseBdev3", 00:15:00.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.092 "is_configured": false, 00:15:00.092 "data_offset": 0, 00:15:00.092 "data_size": 0 00:15:00.092 }, 00:15:00.092 { 00:15:00.092 "name": "BaseBdev4", 00:15:00.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.092 "is_configured": false, 00:15:00.092 "data_offset": 0, 00:15:00.092 "data_size": 0 00:15:00.092 } 00:15:00.092 ] 00:15:00.092 }' 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.092 06:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.658 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:15:00.658 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.658 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.658 [2024-12-06 06:41:19.011067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.658 [2024-12-06 06:41:19.011122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:00.658 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.658 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:00.658 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.658 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.658 [2024-12-06 06:41:19.019054] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.658 [2024-12-06 06:41:19.019107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.658 [2024-12-06 06:41:19.019121] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.659 [2024-12-06 06:41:19.019137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.659 [2024-12-06 06:41:19.019147] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:00.659 [2024-12-06 06:41:19.019160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:00.659 [2024-12-06 06:41:19.019170] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:00.659 [2024-12-06 06:41:19.019184] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.659 [2024-12-06 06:41:19.064801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.659 BaseBdev1 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.659 [ 00:15:00.659 { 00:15:00.659 "name": "BaseBdev1", 00:15:00.659 "aliases": [ 00:15:00.659 "9e5178e3-7ba1-4b63-9979-62b4f7c5dd56" 00:15:00.659 ], 00:15:00.659 "product_name": "Malloc disk", 00:15:00.659 "block_size": 512, 00:15:00.659 "num_blocks": 65536, 00:15:00.659 "uuid": "9e5178e3-7ba1-4b63-9979-62b4f7c5dd56", 00:15:00.659 "assigned_rate_limits": { 00:15:00.659 "rw_ios_per_sec": 0, 00:15:00.659 "rw_mbytes_per_sec": 0, 00:15:00.659 "r_mbytes_per_sec": 0, 00:15:00.659 "w_mbytes_per_sec": 0 00:15:00.659 }, 00:15:00.659 "claimed": true, 00:15:00.659 "claim_type": "exclusive_write", 00:15:00.659 "zoned": false, 00:15:00.659 "supported_io_types": { 00:15:00.659 "read": true, 00:15:00.659 "write": true, 00:15:00.659 "unmap": true, 00:15:00.659 "flush": true, 00:15:00.659 "reset": true, 00:15:00.659 "nvme_admin": false, 00:15:00.659 "nvme_io": false, 00:15:00.659 "nvme_io_md": false, 00:15:00.659 "write_zeroes": true, 00:15:00.659 "zcopy": true, 00:15:00.659 "get_zone_info": false, 00:15:00.659 "zone_management": false, 00:15:00.659 "zone_append": false, 00:15:00.659 "compare": false, 00:15:00.659 "compare_and_write": false, 00:15:00.659 "abort": true, 00:15:00.659 "seek_hole": false, 00:15:00.659 "seek_data": false, 00:15:00.659 "copy": true, 00:15:00.659 "nvme_iov_md": false 00:15:00.659 }, 00:15:00.659 "memory_domains": [ 00:15:00.659 { 00:15:00.659 "dma_device_id": "system", 00:15:00.659 "dma_device_type": 1 00:15:00.659 }, 00:15:00.659 { 00:15:00.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.659 "dma_device_type": 2 00:15:00.659 } 00:15:00.659 ], 00:15:00.659 "driver_specific": {} 00:15:00.659 } 00:15:00.659 ] 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.659 "name": "Existed_Raid", 
00:15:00.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.659 "strip_size_kb": 64, 00:15:00.659 "state": "configuring", 00:15:00.659 "raid_level": "raid0", 00:15:00.659 "superblock": false, 00:15:00.659 "num_base_bdevs": 4, 00:15:00.659 "num_base_bdevs_discovered": 1, 00:15:00.659 "num_base_bdevs_operational": 4, 00:15:00.659 "base_bdevs_list": [ 00:15:00.659 { 00:15:00.659 "name": "BaseBdev1", 00:15:00.659 "uuid": "9e5178e3-7ba1-4b63-9979-62b4f7c5dd56", 00:15:00.659 "is_configured": true, 00:15:00.659 "data_offset": 0, 00:15:00.659 "data_size": 65536 00:15:00.659 }, 00:15:00.659 { 00:15:00.659 "name": "BaseBdev2", 00:15:00.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.659 "is_configured": false, 00:15:00.659 "data_offset": 0, 00:15:00.659 "data_size": 0 00:15:00.659 }, 00:15:00.659 { 00:15:00.659 "name": "BaseBdev3", 00:15:00.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.659 "is_configured": false, 00:15:00.659 "data_offset": 0, 00:15:00.659 "data_size": 0 00:15:00.659 }, 00:15:00.659 { 00:15:00.659 "name": "BaseBdev4", 00:15:00.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.659 "is_configured": false, 00:15:00.659 "data_offset": 0, 00:15:00.659 "data_size": 0 00:15:00.659 } 00:15:00.659 ] 00:15:00.659 }' 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.659 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.917 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:00.918 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.918 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.918 [2024-12-06 06:41:19.552973] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.918 [2024-12-06 06:41:19.553043] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:00.918 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.918 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:00.918 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.918 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.918 [2024-12-06 06:41:19.561025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.176 [2024-12-06 06:41:19.563442] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.176 [2024-12-06 06:41:19.563498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.176 [2024-12-06 06:41:19.563514] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:01.176 [2024-12-06 06:41:19.563547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:01.176 [2024-12-06 06:41:19.563560] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:01.176 [2024-12-06 06:41:19.563573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.176 "name": "Existed_Raid", 00:15:01.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.176 "strip_size_kb": 64, 00:15:01.176 "state": "configuring", 00:15:01.176 "raid_level": "raid0", 00:15:01.176 "superblock": false, 00:15:01.176 "num_base_bdevs": 4, 00:15:01.176 
"num_base_bdevs_discovered": 1, 00:15:01.176 "num_base_bdevs_operational": 4, 00:15:01.176 "base_bdevs_list": [ 00:15:01.176 { 00:15:01.176 "name": "BaseBdev1", 00:15:01.176 "uuid": "9e5178e3-7ba1-4b63-9979-62b4f7c5dd56", 00:15:01.176 "is_configured": true, 00:15:01.176 "data_offset": 0, 00:15:01.176 "data_size": 65536 00:15:01.176 }, 00:15:01.176 { 00:15:01.176 "name": "BaseBdev2", 00:15:01.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.176 "is_configured": false, 00:15:01.176 "data_offset": 0, 00:15:01.176 "data_size": 0 00:15:01.176 }, 00:15:01.176 { 00:15:01.176 "name": "BaseBdev3", 00:15:01.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.176 "is_configured": false, 00:15:01.176 "data_offset": 0, 00:15:01.176 "data_size": 0 00:15:01.176 }, 00:15:01.176 { 00:15:01.176 "name": "BaseBdev4", 00:15:01.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.176 "is_configured": false, 00:15:01.176 "data_offset": 0, 00:15:01.176 "data_size": 0 00:15:01.176 } 00:15:01.176 ] 00:15:01.176 }' 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.176 06:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.434 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:01.434 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.434 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.691 [2024-12-06 06:41:20.095833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.692 BaseBdev2 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:01.692 06:41:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.692 [ 00:15:01.692 { 00:15:01.692 "name": "BaseBdev2", 00:15:01.692 "aliases": [ 00:15:01.692 "b27815ce-d68c-4bde-bb6e-0fc1fdf4b117" 00:15:01.692 ], 00:15:01.692 "product_name": "Malloc disk", 00:15:01.692 "block_size": 512, 00:15:01.692 "num_blocks": 65536, 00:15:01.692 "uuid": "b27815ce-d68c-4bde-bb6e-0fc1fdf4b117", 00:15:01.692 "assigned_rate_limits": { 00:15:01.692 "rw_ios_per_sec": 0, 00:15:01.692 "rw_mbytes_per_sec": 0, 00:15:01.692 "r_mbytes_per_sec": 0, 00:15:01.692 "w_mbytes_per_sec": 0 00:15:01.692 }, 00:15:01.692 "claimed": true, 00:15:01.692 "claim_type": "exclusive_write", 00:15:01.692 "zoned": false, 00:15:01.692 "supported_io_types": { 
00:15:01.692 "read": true, 00:15:01.692 "write": true, 00:15:01.692 "unmap": true, 00:15:01.692 "flush": true, 00:15:01.692 "reset": true, 00:15:01.692 "nvme_admin": false, 00:15:01.692 "nvme_io": false, 00:15:01.692 "nvme_io_md": false, 00:15:01.692 "write_zeroes": true, 00:15:01.692 "zcopy": true, 00:15:01.692 "get_zone_info": false, 00:15:01.692 "zone_management": false, 00:15:01.692 "zone_append": false, 00:15:01.692 "compare": false, 00:15:01.692 "compare_and_write": false, 00:15:01.692 "abort": true, 00:15:01.692 "seek_hole": false, 00:15:01.692 "seek_data": false, 00:15:01.692 "copy": true, 00:15:01.692 "nvme_iov_md": false 00:15:01.692 }, 00:15:01.692 "memory_domains": [ 00:15:01.692 { 00:15:01.692 "dma_device_id": "system", 00:15:01.692 "dma_device_type": 1 00:15:01.692 }, 00:15:01.692 { 00:15:01.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.692 "dma_device_type": 2 00:15:01.692 } 00:15:01.692 ], 00:15:01.692 "driver_specific": {} 00:15:01.692 } 00:15:01.692 ] 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.692 "name": "Existed_Raid", 00:15:01.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.692 "strip_size_kb": 64, 00:15:01.692 "state": "configuring", 00:15:01.692 "raid_level": "raid0", 00:15:01.692 "superblock": false, 00:15:01.692 "num_base_bdevs": 4, 00:15:01.692 "num_base_bdevs_discovered": 2, 00:15:01.692 "num_base_bdevs_operational": 4, 00:15:01.692 "base_bdevs_list": [ 00:15:01.692 { 00:15:01.692 "name": "BaseBdev1", 00:15:01.692 "uuid": "9e5178e3-7ba1-4b63-9979-62b4f7c5dd56", 00:15:01.692 "is_configured": true, 00:15:01.692 "data_offset": 0, 00:15:01.692 "data_size": 65536 00:15:01.692 }, 00:15:01.692 { 00:15:01.692 "name": "BaseBdev2", 00:15:01.692 "uuid": "b27815ce-d68c-4bde-bb6e-0fc1fdf4b117", 00:15:01.692 
"is_configured": true, 00:15:01.692 "data_offset": 0, 00:15:01.692 "data_size": 65536 00:15:01.692 }, 00:15:01.692 { 00:15:01.692 "name": "BaseBdev3", 00:15:01.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.692 "is_configured": false, 00:15:01.692 "data_offset": 0, 00:15:01.692 "data_size": 0 00:15:01.692 }, 00:15:01.692 { 00:15:01.692 "name": "BaseBdev4", 00:15:01.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.692 "is_configured": false, 00:15:01.692 "data_offset": 0, 00:15:01.692 "data_size": 0 00:15:01.692 } 00:15:01.692 ] 00:15:01.692 }' 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.692 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.259 [2024-12-06 06:41:20.701903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:02.259 BaseBdev3 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.259 [ 00:15:02.259 { 00:15:02.259 "name": "BaseBdev3", 00:15:02.259 "aliases": [ 00:15:02.259 "624aeb3e-db11-4487-9ef1-85d980fca78b" 00:15:02.259 ], 00:15:02.259 "product_name": "Malloc disk", 00:15:02.259 "block_size": 512, 00:15:02.259 "num_blocks": 65536, 00:15:02.259 "uuid": "624aeb3e-db11-4487-9ef1-85d980fca78b", 00:15:02.259 "assigned_rate_limits": { 00:15:02.259 "rw_ios_per_sec": 0, 00:15:02.259 "rw_mbytes_per_sec": 0, 00:15:02.259 "r_mbytes_per_sec": 0, 00:15:02.259 "w_mbytes_per_sec": 0 00:15:02.259 }, 00:15:02.259 "claimed": true, 00:15:02.259 "claim_type": "exclusive_write", 00:15:02.259 "zoned": false, 00:15:02.259 "supported_io_types": { 00:15:02.259 "read": true, 00:15:02.259 "write": true, 00:15:02.259 "unmap": true, 00:15:02.259 "flush": true, 00:15:02.259 "reset": true, 00:15:02.259 "nvme_admin": false, 00:15:02.259 "nvme_io": false, 00:15:02.259 "nvme_io_md": false, 00:15:02.259 "write_zeroes": true, 00:15:02.259 "zcopy": true, 00:15:02.259 "get_zone_info": false, 00:15:02.259 "zone_management": false, 00:15:02.259 "zone_append": false, 00:15:02.259 "compare": false, 00:15:02.259 "compare_and_write": false, 
00:15:02.259 "abort": true, 00:15:02.259 "seek_hole": false, 00:15:02.259 "seek_data": false, 00:15:02.259 "copy": true, 00:15:02.259 "nvme_iov_md": false 00:15:02.259 }, 00:15:02.259 "memory_domains": [ 00:15:02.259 { 00:15:02.259 "dma_device_id": "system", 00:15:02.259 "dma_device_type": 1 00:15:02.259 }, 00:15:02.259 { 00:15:02.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.259 "dma_device_type": 2 00:15:02.259 } 00:15:02.259 ], 00:15:02.259 "driver_specific": {} 00:15:02.259 } 00:15:02.259 ] 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.259 "name": "Existed_Raid", 00:15:02.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.259 "strip_size_kb": 64, 00:15:02.259 "state": "configuring", 00:15:02.259 "raid_level": "raid0", 00:15:02.259 "superblock": false, 00:15:02.259 "num_base_bdevs": 4, 00:15:02.259 "num_base_bdevs_discovered": 3, 00:15:02.259 "num_base_bdevs_operational": 4, 00:15:02.259 "base_bdevs_list": [ 00:15:02.259 { 00:15:02.259 "name": "BaseBdev1", 00:15:02.259 "uuid": "9e5178e3-7ba1-4b63-9979-62b4f7c5dd56", 00:15:02.259 "is_configured": true, 00:15:02.259 "data_offset": 0, 00:15:02.259 "data_size": 65536 00:15:02.259 }, 00:15:02.259 { 00:15:02.259 "name": "BaseBdev2", 00:15:02.259 "uuid": "b27815ce-d68c-4bde-bb6e-0fc1fdf4b117", 00:15:02.259 "is_configured": true, 00:15:02.259 "data_offset": 0, 00:15:02.259 "data_size": 65536 00:15:02.259 }, 00:15:02.259 { 00:15:02.259 "name": "BaseBdev3", 00:15:02.259 "uuid": "624aeb3e-db11-4487-9ef1-85d980fca78b", 00:15:02.259 "is_configured": true, 00:15:02.259 "data_offset": 0, 00:15:02.259 "data_size": 65536 00:15:02.259 }, 00:15:02.259 { 00:15:02.259 "name": "BaseBdev4", 00:15:02.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.259 "is_configured": false, 
00:15:02.259 "data_offset": 0, 00:15:02.259 "data_size": 0 00:15:02.259 } 00:15:02.259 ] 00:15:02.259 }' 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.259 06:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.826 [2024-12-06 06:41:21.281047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:02.826 [2024-12-06 06:41:21.281126] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:02.826 [2024-12-06 06:41:21.281141] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:02.826 [2024-12-06 06:41:21.281521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:02.826 [2024-12-06 06:41:21.281768] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:02.826 [2024-12-06 06:41:21.281801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:02.826 [2024-12-06 06:41:21.282122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.826 BaseBdev4 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.826 [ 00:15:02.826 { 00:15:02.826 "name": "BaseBdev4", 00:15:02.826 "aliases": [ 00:15:02.826 "7e59e118-eee9-4c4c-be6b-0b694ffe13a1" 00:15:02.826 ], 00:15:02.826 "product_name": "Malloc disk", 00:15:02.826 "block_size": 512, 00:15:02.826 "num_blocks": 65536, 00:15:02.826 "uuid": "7e59e118-eee9-4c4c-be6b-0b694ffe13a1", 00:15:02.826 "assigned_rate_limits": { 00:15:02.826 "rw_ios_per_sec": 0, 00:15:02.826 "rw_mbytes_per_sec": 0, 00:15:02.826 "r_mbytes_per_sec": 0, 00:15:02.826 "w_mbytes_per_sec": 0 00:15:02.826 }, 00:15:02.826 "claimed": true, 00:15:02.826 "claim_type": "exclusive_write", 00:15:02.826 "zoned": false, 00:15:02.826 "supported_io_types": { 00:15:02.826 "read": true, 00:15:02.826 "write": true, 00:15:02.826 "unmap": true, 00:15:02.826 "flush": true, 00:15:02.826 "reset": true, 00:15:02.826 
"nvme_admin": false, 00:15:02.826 "nvme_io": false, 00:15:02.826 "nvme_io_md": false, 00:15:02.826 "write_zeroes": true, 00:15:02.826 "zcopy": true, 00:15:02.826 "get_zone_info": false, 00:15:02.826 "zone_management": false, 00:15:02.826 "zone_append": false, 00:15:02.826 "compare": false, 00:15:02.826 "compare_and_write": false, 00:15:02.826 "abort": true, 00:15:02.826 "seek_hole": false, 00:15:02.826 "seek_data": false, 00:15:02.826 "copy": true, 00:15:02.826 "nvme_iov_md": false 00:15:02.826 }, 00:15:02.826 "memory_domains": [ 00:15:02.826 { 00:15:02.826 "dma_device_id": "system", 00:15:02.826 "dma_device_type": 1 00:15:02.826 }, 00:15:02.826 { 00:15:02.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.826 "dma_device_type": 2 00:15:02.826 } 00:15:02.826 ], 00:15:02.826 "driver_specific": {} 00:15:02.826 } 00:15:02.826 ] 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.826 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.827 06:41:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.827 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.827 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.827 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.827 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.827 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.827 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.827 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.827 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.827 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.827 "name": "Existed_Raid", 00:15:02.827 "uuid": "ae35372a-c1de-4845-8cbe-dc70749b59ce", 00:15:02.827 "strip_size_kb": 64, 00:15:02.827 "state": "online", 00:15:02.827 "raid_level": "raid0", 00:15:02.827 "superblock": false, 00:15:02.827 "num_base_bdevs": 4, 00:15:02.827 "num_base_bdevs_discovered": 4, 00:15:02.827 "num_base_bdevs_operational": 4, 00:15:02.827 "base_bdevs_list": [ 00:15:02.827 { 00:15:02.827 "name": "BaseBdev1", 00:15:02.827 "uuid": "9e5178e3-7ba1-4b63-9979-62b4f7c5dd56", 00:15:02.827 "is_configured": true, 00:15:02.827 "data_offset": 0, 00:15:02.827 "data_size": 65536 00:15:02.827 }, 00:15:02.827 { 00:15:02.827 "name": "BaseBdev2", 00:15:02.827 "uuid": "b27815ce-d68c-4bde-bb6e-0fc1fdf4b117", 00:15:02.827 "is_configured": true, 00:15:02.827 "data_offset": 0, 00:15:02.827 "data_size": 65536 00:15:02.827 }, 00:15:02.827 { 00:15:02.827 "name": "BaseBdev3", 00:15:02.827 "uuid": 
"624aeb3e-db11-4487-9ef1-85d980fca78b", 00:15:02.827 "is_configured": true, 00:15:02.827 "data_offset": 0, 00:15:02.827 "data_size": 65536 00:15:02.827 }, 00:15:02.827 { 00:15:02.827 "name": "BaseBdev4", 00:15:02.827 "uuid": "7e59e118-eee9-4c4c-be6b-0b694ffe13a1", 00:15:02.827 "is_configured": true, 00:15:02.827 "data_offset": 0, 00:15:02.827 "data_size": 65536 00:15:02.827 } 00:15:02.827 ] 00:15:02.827 }' 00:15:02.827 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.827 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.394 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.395 [2024-12-06 06:41:21.837757] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.395 06:41:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:03.395 "name": "Existed_Raid", 00:15:03.395 "aliases": [ 00:15:03.395 "ae35372a-c1de-4845-8cbe-dc70749b59ce" 00:15:03.395 ], 00:15:03.395 "product_name": "Raid Volume", 00:15:03.395 "block_size": 512, 00:15:03.395 "num_blocks": 262144, 00:15:03.395 "uuid": "ae35372a-c1de-4845-8cbe-dc70749b59ce", 00:15:03.395 "assigned_rate_limits": { 00:15:03.395 "rw_ios_per_sec": 0, 00:15:03.395 "rw_mbytes_per_sec": 0, 00:15:03.395 "r_mbytes_per_sec": 0, 00:15:03.395 "w_mbytes_per_sec": 0 00:15:03.395 }, 00:15:03.395 "claimed": false, 00:15:03.395 "zoned": false, 00:15:03.395 "supported_io_types": { 00:15:03.395 "read": true, 00:15:03.395 "write": true, 00:15:03.395 "unmap": true, 00:15:03.395 "flush": true, 00:15:03.395 "reset": true, 00:15:03.395 "nvme_admin": false, 00:15:03.395 "nvme_io": false, 00:15:03.395 "nvme_io_md": false, 00:15:03.395 "write_zeroes": true, 00:15:03.395 "zcopy": false, 00:15:03.395 "get_zone_info": false, 00:15:03.395 "zone_management": false, 00:15:03.395 "zone_append": false, 00:15:03.395 "compare": false, 00:15:03.395 "compare_and_write": false, 00:15:03.395 "abort": false, 00:15:03.395 "seek_hole": false, 00:15:03.395 "seek_data": false, 00:15:03.395 "copy": false, 00:15:03.395 "nvme_iov_md": false 00:15:03.395 }, 00:15:03.395 "memory_domains": [ 00:15:03.395 { 00:15:03.395 "dma_device_id": "system", 00:15:03.395 "dma_device_type": 1 00:15:03.395 }, 00:15:03.395 { 00:15:03.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.395 "dma_device_type": 2 00:15:03.395 }, 00:15:03.395 { 00:15:03.395 "dma_device_id": "system", 00:15:03.395 "dma_device_type": 1 00:15:03.395 }, 00:15:03.395 { 00:15:03.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.395 "dma_device_type": 2 00:15:03.395 }, 00:15:03.395 { 00:15:03.395 "dma_device_id": "system", 00:15:03.395 "dma_device_type": 1 00:15:03.395 }, 00:15:03.395 { 00:15:03.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:03.395 "dma_device_type": 2 00:15:03.395 }, 00:15:03.395 { 00:15:03.395 "dma_device_id": "system", 00:15:03.395 "dma_device_type": 1 00:15:03.395 }, 00:15:03.395 { 00:15:03.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.395 "dma_device_type": 2 00:15:03.395 } 00:15:03.395 ], 00:15:03.395 "driver_specific": { 00:15:03.395 "raid": { 00:15:03.395 "uuid": "ae35372a-c1de-4845-8cbe-dc70749b59ce", 00:15:03.395 "strip_size_kb": 64, 00:15:03.395 "state": "online", 00:15:03.395 "raid_level": "raid0", 00:15:03.395 "superblock": false, 00:15:03.395 "num_base_bdevs": 4, 00:15:03.395 "num_base_bdevs_discovered": 4, 00:15:03.395 "num_base_bdevs_operational": 4, 00:15:03.395 "base_bdevs_list": [ 00:15:03.395 { 00:15:03.395 "name": "BaseBdev1", 00:15:03.395 "uuid": "9e5178e3-7ba1-4b63-9979-62b4f7c5dd56", 00:15:03.395 "is_configured": true, 00:15:03.395 "data_offset": 0, 00:15:03.395 "data_size": 65536 00:15:03.395 }, 00:15:03.395 { 00:15:03.395 "name": "BaseBdev2", 00:15:03.395 "uuid": "b27815ce-d68c-4bde-bb6e-0fc1fdf4b117", 00:15:03.395 "is_configured": true, 00:15:03.395 "data_offset": 0, 00:15:03.395 "data_size": 65536 00:15:03.395 }, 00:15:03.395 { 00:15:03.395 "name": "BaseBdev3", 00:15:03.395 "uuid": "624aeb3e-db11-4487-9ef1-85d980fca78b", 00:15:03.395 "is_configured": true, 00:15:03.395 "data_offset": 0, 00:15:03.395 "data_size": 65536 00:15:03.395 }, 00:15:03.395 { 00:15:03.395 "name": "BaseBdev4", 00:15:03.395 "uuid": "7e59e118-eee9-4c4c-be6b-0b694ffe13a1", 00:15:03.395 "is_configured": true, 00:15:03.395 "data_offset": 0, 00:15:03.395 "data_size": 65536 00:15:03.395 } 00:15:03.395 ] 00:15:03.395 } 00:15:03.395 } 00:15:03.395 }' 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:03.395 BaseBdev2 00:15:03.395 BaseBdev3 
00:15:03.395 BaseBdev4' 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.395 06:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.395 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.654 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.654 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.654 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.654 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.655 06:41:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.655 06:41:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.655 [2024-12-06 06:41:22.201508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.655 [2024-12-06 06:41:22.201565] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.655 [2024-12-06 06:41:22.201638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.655 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.914 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.914 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.914 "name": "Existed_Raid", 00:15:03.914 "uuid": "ae35372a-c1de-4845-8cbe-dc70749b59ce", 00:15:03.914 "strip_size_kb": 64, 00:15:03.914 "state": "offline", 00:15:03.914 "raid_level": "raid0", 00:15:03.914 "superblock": false, 00:15:03.914 "num_base_bdevs": 4, 00:15:03.914 "num_base_bdevs_discovered": 3, 00:15:03.914 "num_base_bdevs_operational": 3, 00:15:03.914 "base_bdevs_list": [ 00:15:03.914 { 00:15:03.914 "name": null, 00:15:03.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.914 "is_configured": false, 00:15:03.914 "data_offset": 0, 00:15:03.914 "data_size": 65536 00:15:03.914 }, 00:15:03.914 { 00:15:03.914 "name": "BaseBdev2", 00:15:03.914 "uuid": "b27815ce-d68c-4bde-bb6e-0fc1fdf4b117", 00:15:03.914 "is_configured": 
true, 00:15:03.914 "data_offset": 0, 00:15:03.914 "data_size": 65536 00:15:03.914 }, 00:15:03.914 { 00:15:03.914 "name": "BaseBdev3", 00:15:03.914 "uuid": "624aeb3e-db11-4487-9ef1-85d980fca78b", 00:15:03.914 "is_configured": true, 00:15:03.914 "data_offset": 0, 00:15:03.914 "data_size": 65536 00:15:03.914 }, 00:15:03.914 { 00:15:03.914 "name": "BaseBdev4", 00:15:03.914 "uuid": "7e59e118-eee9-4c4c-be6b-0b694ffe13a1", 00:15:03.914 "is_configured": true, 00:15:03.914 "data_offset": 0, 00:15:03.914 "data_size": 65536 00:15:03.914 } 00:15:03.914 ] 00:15:03.914 }' 00:15:03.914 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.914 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.481 [2024-12-06 06:41:22.888594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.481 06:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.481 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:04.481 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:04.481 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:04.481 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.481 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.481 [2024-12-06 06:41:23.035261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:04.481 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.481 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:04.481 06:41:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.740 [2024-12-06 06:41:23.177719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:04.740 [2024-12-06 06:41:23.177909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.740 BaseBdev2 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.740 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.740 [ 00:15:04.740 { 00:15:04.740 "name": "BaseBdev2", 00:15:04.740 "aliases": [ 00:15:04.740 "c3c7b7fb-ca86-447c-95da-bd34aeb32f6a" 00:15:04.740 ], 00:15:04.740 "product_name": "Malloc disk", 00:15:04.740 "block_size": 512, 00:15:04.740 "num_blocks": 65536, 00:15:04.740 "uuid": "c3c7b7fb-ca86-447c-95da-bd34aeb32f6a", 00:15:04.740 "assigned_rate_limits": { 00:15:04.740 "rw_ios_per_sec": 0, 00:15:04.740 "rw_mbytes_per_sec": 0, 00:15:04.740 "r_mbytes_per_sec": 0, 00:15:04.740 "w_mbytes_per_sec": 0 00:15:04.740 }, 00:15:04.740 "claimed": false, 00:15:04.740 "zoned": false, 00:15:04.740 "supported_io_types": { 00:15:04.741 "read": true, 00:15:04.741 "write": true, 00:15:04.741 "unmap": true, 00:15:04.741 "flush": true, 00:15:04.741 "reset": true, 00:15:04.741 "nvme_admin": false, 00:15:04.741 "nvme_io": false, 00:15:04.741 "nvme_io_md": false, 00:15:04.741 "write_zeroes": true, 00:15:04.741 "zcopy": true, 00:15:04.741 "get_zone_info": false, 00:15:04.741 "zone_management": false, 00:15:04.741 "zone_append": false, 00:15:04.741 "compare": false, 00:15:04.741 "compare_and_write": false, 00:15:04.741 "abort": true, 00:15:04.741 "seek_hole": false, 00:15:04.741 
"seek_data": false, 00:15:04.741 "copy": true, 00:15:04.741 "nvme_iov_md": false 00:15:04.741 }, 00:15:04.741 "memory_domains": [ 00:15:04.741 { 00:15:04.741 "dma_device_id": "system", 00:15:05.000 "dma_device_type": 1 00:15:05.000 }, 00:15:05.000 { 00:15:05.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.000 "dma_device_type": 2 00:15:05.000 } 00:15:05.000 ], 00:15:05.000 "driver_specific": {} 00:15:05.000 } 00:15:05.000 ] 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.000 BaseBdev3 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.000 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.000 [ 00:15:05.000 { 00:15:05.000 "name": "BaseBdev3", 00:15:05.000 "aliases": [ 00:15:05.000 "53635c37-4884-4a85-a0c4-a98e6d806480" 00:15:05.000 ], 00:15:05.000 "product_name": "Malloc disk", 00:15:05.000 "block_size": 512, 00:15:05.000 "num_blocks": 65536, 00:15:05.000 "uuid": "53635c37-4884-4a85-a0c4-a98e6d806480", 00:15:05.000 "assigned_rate_limits": { 00:15:05.000 "rw_ios_per_sec": 0, 00:15:05.000 "rw_mbytes_per_sec": 0, 00:15:05.000 "r_mbytes_per_sec": 0, 00:15:05.000 "w_mbytes_per_sec": 0 00:15:05.000 }, 00:15:05.000 "claimed": false, 00:15:05.000 "zoned": false, 00:15:05.000 "supported_io_types": { 00:15:05.000 "read": true, 00:15:05.000 "write": true, 00:15:05.000 "unmap": true, 00:15:05.000 "flush": true, 00:15:05.000 "reset": true, 00:15:05.000 "nvme_admin": false, 00:15:05.000 "nvme_io": false, 00:15:05.000 "nvme_io_md": false, 00:15:05.000 "write_zeroes": true, 00:15:05.000 "zcopy": true, 00:15:05.000 "get_zone_info": false, 00:15:05.000 "zone_management": false, 00:15:05.000 "zone_append": false, 00:15:05.000 "compare": false, 00:15:05.000 "compare_and_write": false, 00:15:05.000 "abort": true, 00:15:05.000 "seek_hole": false, 00:15:05.000 "seek_data": false, 
00:15:05.000 "copy": true, 00:15:05.000 "nvme_iov_md": false 00:15:05.001 }, 00:15:05.001 "memory_domains": [ 00:15:05.001 { 00:15:05.001 "dma_device_id": "system", 00:15:05.001 "dma_device_type": 1 00:15:05.001 }, 00:15:05.001 { 00:15:05.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.001 "dma_device_type": 2 00:15:05.001 } 00:15:05.001 ], 00:15:05.001 "driver_specific": {} 00:15:05.001 } 00:15:05.001 ] 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.001 BaseBdev4 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:05.001 
06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.001 [ 00:15:05.001 { 00:15:05.001 "name": "BaseBdev4", 00:15:05.001 "aliases": [ 00:15:05.001 "529ca66b-8a6a-4e6a-b4ef-2854fa967754" 00:15:05.001 ], 00:15:05.001 "product_name": "Malloc disk", 00:15:05.001 "block_size": 512, 00:15:05.001 "num_blocks": 65536, 00:15:05.001 "uuid": "529ca66b-8a6a-4e6a-b4ef-2854fa967754", 00:15:05.001 "assigned_rate_limits": { 00:15:05.001 "rw_ios_per_sec": 0, 00:15:05.001 "rw_mbytes_per_sec": 0, 00:15:05.001 "r_mbytes_per_sec": 0, 00:15:05.001 "w_mbytes_per_sec": 0 00:15:05.001 }, 00:15:05.001 "claimed": false, 00:15:05.001 "zoned": false, 00:15:05.001 "supported_io_types": { 00:15:05.001 "read": true, 00:15:05.001 "write": true, 00:15:05.001 "unmap": true, 00:15:05.001 "flush": true, 00:15:05.001 "reset": true, 00:15:05.001 "nvme_admin": false, 00:15:05.001 "nvme_io": false, 00:15:05.001 "nvme_io_md": false, 00:15:05.001 "write_zeroes": true, 00:15:05.001 "zcopy": true, 00:15:05.001 "get_zone_info": false, 00:15:05.001 "zone_management": false, 00:15:05.001 "zone_append": false, 00:15:05.001 "compare": false, 00:15:05.001 "compare_and_write": false, 00:15:05.001 "abort": true, 00:15:05.001 "seek_hole": false, 00:15:05.001 "seek_data": false, 00:15:05.001 
"copy": true, 00:15:05.001 "nvme_iov_md": false 00:15:05.001 }, 00:15:05.001 "memory_domains": [ 00:15:05.001 { 00:15:05.001 "dma_device_id": "system", 00:15:05.001 "dma_device_type": 1 00:15:05.001 }, 00:15:05.001 { 00:15:05.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.001 "dma_device_type": 2 00:15:05.001 } 00:15:05.001 ], 00:15:05.001 "driver_specific": {} 00:15:05.001 } 00:15:05.001 ] 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.001 [2024-12-06 06:41:23.547384] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:05.001 [2024-12-06 06:41:23.547446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:05.001 [2024-12-06 06:41:23.547485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.001 [2024-12-06 06:41:23.550031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:05.001 [2024-12-06 06:41:23.550248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.001 06:41:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.001 "name": "Existed_Raid", 00:15:05.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.001 "strip_size_kb": 64, 00:15:05.001 "state": "configuring", 00:15:05.001 
"raid_level": "raid0", 00:15:05.001 "superblock": false, 00:15:05.001 "num_base_bdevs": 4, 00:15:05.001 "num_base_bdevs_discovered": 3, 00:15:05.001 "num_base_bdevs_operational": 4, 00:15:05.001 "base_bdevs_list": [ 00:15:05.001 { 00:15:05.001 "name": "BaseBdev1", 00:15:05.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.001 "is_configured": false, 00:15:05.001 "data_offset": 0, 00:15:05.001 "data_size": 0 00:15:05.001 }, 00:15:05.001 { 00:15:05.001 "name": "BaseBdev2", 00:15:05.001 "uuid": "c3c7b7fb-ca86-447c-95da-bd34aeb32f6a", 00:15:05.001 "is_configured": true, 00:15:05.001 "data_offset": 0, 00:15:05.001 "data_size": 65536 00:15:05.001 }, 00:15:05.001 { 00:15:05.001 "name": "BaseBdev3", 00:15:05.001 "uuid": "53635c37-4884-4a85-a0c4-a98e6d806480", 00:15:05.001 "is_configured": true, 00:15:05.001 "data_offset": 0, 00:15:05.001 "data_size": 65536 00:15:05.001 }, 00:15:05.001 { 00:15:05.001 "name": "BaseBdev4", 00:15:05.001 "uuid": "529ca66b-8a6a-4e6a-b4ef-2854fa967754", 00:15:05.001 "is_configured": true, 00:15:05.001 "data_offset": 0, 00:15:05.001 "data_size": 65536 00:15:05.001 } 00:15:05.001 ] 00:15:05.001 }' 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.001 06:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.569 [2024-12-06 06:41:24.095565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.569 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.569 "name": "Existed_Raid", 00:15:05.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.569 "strip_size_kb": 64, 00:15:05.569 "state": "configuring", 00:15:05.569 "raid_level": "raid0", 00:15:05.569 "superblock": false, 00:15:05.569 
"num_base_bdevs": 4, 00:15:05.569 "num_base_bdevs_discovered": 2, 00:15:05.569 "num_base_bdevs_operational": 4, 00:15:05.569 "base_bdevs_list": [ 00:15:05.569 { 00:15:05.569 "name": "BaseBdev1", 00:15:05.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.569 "is_configured": false, 00:15:05.569 "data_offset": 0, 00:15:05.569 "data_size": 0 00:15:05.569 }, 00:15:05.569 { 00:15:05.570 "name": null, 00:15:05.570 "uuid": "c3c7b7fb-ca86-447c-95da-bd34aeb32f6a", 00:15:05.570 "is_configured": false, 00:15:05.570 "data_offset": 0, 00:15:05.570 "data_size": 65536 00:15:05.570 }, 00:15:05.570 { 00:15:05.570 "name": "BaseBdev3", 00:15:05.570 "uuid": "53635c37-4884-4a85-a0c4-a98e6d806480", 00:15:05.570 "is_configured": true, 00:15:05.570 "data_offset": 0, 00:15:05.570 "data_size": 65536 00:15:05.570 }, 00:15:05.570 { 00:15:05.570 "name": "BaseBdev4", 00:15:05.570 "uuid": "529ca66b-8a6a-4e6a-b4ef-2854fa967754", 00:15:05.570 "is_configured": true, 00:15:05.570 "data_offset": 0, 00:15:05.570 "data_size": 65536 00:15:05.570 } 00:15:05.570 ] 00:15:05.570 }' 00:15:05.570 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.570 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:06.137 06:41:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.137 [2024-12-06 06:41:24.673830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.137 BaseBdev1 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:06.137 [ 00:15:06.137 { 00:15:06.137 "name": "BaseBdev1", 00:15:06.137 "aliases": [ 00:15:06.137 "906650e9-2bf3-4028-819e-919a2f2355cf" 00:15:06.137 ], 00:15:06.137 "product_name": "Malloc disk", 00:15:06.137 "block_size": 512, 00:15:06.137 "num_blocks": 65536, 00:15:06.137 "uuid": "906650e9-2bf3-4028-819e-919a2f2355cf", 00:15:06.137 "assigned_rate_limits": { 00:15:06.137 "rw_ios_per_sec": 0, 00:15:06.137 "rw_mbytes_per_sec": 0, 00:15:06.137 "r_mbytes_per_sec": 0, 00:15:06.137 "w_mbytes_per_sec": 0 00:15:06.137 }, 00:15:06.137 "claimed": true, 00:15:06.137 "claim_type": "exclusive_write", 00:15:06.137 "zoned": false, 00:15:06.137 "supported_io_types": { 00:15:06.137 "read": true, 00:15:06.137 "write": true, 00:15:06.137 "unmap": true, 00:15:06.137 "flush": true, 00:15:06.137 "reset": true, 00:15:06.137 "nvme_admin": false, 00:15:06.137 "nvme_io": false, 00:15:06.137 "nvme_io_md": false, 00:15:06.137 "write_zeroes": true, 00:15:06.137 "zcopy": true, 00:15:06.137 "get_zone_info": false, 00:15:06.137 "zone_management": false, 00:15:06.137 "zone_append": false, 00:15:06.137 "compare": false, 00:15:06.137 "compare_and_write": false, 00:15:06.137 "abort": true, 00:15:06.137 "seek_hole": false, 00:15:06.137 "seek_data": false, 00:15:06.137 "copy": true, 00:15:06.137 "nvme_iov_md": false 00:15:06.137 }, 00:15:06.137 "memory_domains": [ 00:15:06.137 { 00:15:06.137 "dma_device_id": "system", 00:15:06.137 "dma_device_type": 1 00:15:06.137 }, 00:15:06.137 { 00:15:06.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.137 "dma_device_type": 2 00:15:06.137 } 00:15:06.137 ], 00:15:06.137 "driver_specific": {} 00:15:06.137 } 00:15:06.137 ] 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.137 "name": "Existed_Raid", 00:15:06.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.137 "strip_size_kb": 64, 00:15:06.137 "state": "configuring", 00:15:06.137 "raid_level": "raid0", 00:15:06.137 "superblock": false, 
00:15:06.137 "num_base_bdevs": 4, 00:15:06.137 "num_base_bdevs_discovered": 3, 00:15:06.137 "num_base_bdevs_operational": 4, 00:15:06.137 "base_bdevs_list": [ 00:15:06.137 { 00:15:06.137 "name": "BaseBdev1", 00:15:06.137 "uuid": "906650e9-2bf3-4028-819e-919a2f2355cf", 00:15:06.137 "is_configured": true, 00:15:06.137 "data_offset": 0, 00:15:06.137 "data_size": 65536 00:15:06.137 }, 00:15:06.137 { 00:15:06.137 "name": null, 00:15:06.137 "uuid": "c3c7b7fb-ca86-447c-95da-bd34aeb32f6a", 00:15:06.137 "is_configured": false, 00:15:06.137 "data_offset": 0, 00:15:06.137 "data_size": 65536 00:15:06.137 }, 00:15:06.137 { 00:15:06.137 "name": "BaseBdev3", 00:15:06.137 "uuid": "53635c37-4884-4a85-a0c4-a98e6d806480", 00:15:06.137 "is_configured": true, 00:15:06.137 "data_offset": 0, 00:15:06.137 "data_size": 65536 00:15:06.137 }, 00:15:06.137 { 00:15:06.137 "name": "BaseBdev4", 00:15:06.137 "uuid": "529ca66b-8a6a-4e6a-b4ef-2854fa967754", 00:15:06.137 "is_configured": true, 00:15:06.137 "data_offset": 0, 00:15:06.137 "data_size": 65536 00:15:06.137 } 00:15:06.137 ] 00:15:06.137 }' 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.137 06:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:06.703 06:41:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.703 [2024-12-06 06:41:25.270106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.703 "name": "Existed_Raid", 00:15:06.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.703 "strip_size_kb": 64, 00:15:06.703 "state": "configuring", 00:15:06.703 "raid_level": "raid0", 00:15:06.703 "superblock": false, 00:15:06.703 "num_base_bdevs": 4, 00:15:06.703 "num_base_bdevs_discovered": 2, 00:15:06.703 "num_base_bdevs_operational": 4, 00:15:06.703 "base_bdevs_list": [ 00:15:06.703 { 00:15:06.703 "name": "BaseBdev1", 00:15:06.703 "uuid": "906650e9-2bf3-4028-819e-919a2f2355cf", 00:15:06.703 "is_configured": true, 00:15:06.703 "data_offset": 0, 00:15:06.703 "data_size": 65536 00:15:06.703 }, 00:15:06.703 { 00:15:06.703 "name": null, 00:15:06.703 "uuid": "c3c7b7fb-ca86-447c-95da-bd34aeb32f6a", 00:15:06.703 "is_configured": false, 00:15:06.703 "data_offset": 0, 00:15:06.703 "data_size": 65536 00:15:06.703 }, 00:15:06.703 { 00:15:06.703 "name": null, 00:15:06.703 "uuid": "53635c37-4884-4a85-a0c4-a98e6d806480", 00:15:06.703 "is_configured": false, 00:15:06.703 "data_offset": 0, 00:15:06.703 "data_size": 65536 00:15:06.703 }, 00:15:06.703 { 00:15:06.703 "name": "BaseBdev4", 00:15:06.703 "uuid": "529ca66b-8a6a-4e6a-b4ef-2854fa967754", 00:15:06.703 "is_configured": true, 00:15:06.703 "data_offset": 0, 00:15:06.703 "data_size": 65536 00:15:06.703 } 00:15:06.703 ] 00:15:06.703 }' 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.703 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.269 [2024-12-06 06:41:25.854238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.269 "name": "Existed_Raid", 00:15:07.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.269 "strip_size_kb": 64, 00:15:07.269 "state": "configuring", 00:15:07.269 "raid_level": "raid0", 00:15:07.269 "superblock": false, 00:15:07.269 "num_base_bdevs": 4, 00:15:07.269 "num_base_bdevs_discovered": 3, 00:15:07.269 "num_base_bdevs_operational": 4, 00:15:07.269 "base_bdevs_list": [ 00:15:07.269 { 00:15:07.269 "name": "BaseBdev1", 00:15:07.269 "uuid": "906650e9-2bf3-4028-819e-919a2f2355cf", 00:15:07.269 "is_configured": true, 00:15:07.269 "data_offset": 0, 00:15:07.269 "data_size": 65536 00:15:07.269 }, 00:15:07.269 { 00:15:07.269 "name": null, 00:15:07.269 "uuid": "c3c7b7fb-ca86-447c-95da-bd34aeb32f6a", 00:15:07.269 "is_configured": false, 00:15:07.269 "data_offset": 0, 00:15:07.269 "data_size": 65536 00:15:07.269 }, 00:15:07.269 { 00:15:07.269 "name": "BaseBdev3", 00:15:07.269 "uuid": "53635c37-4884-4a85-a0c4-a98e6d806480", 00:15:07.269 "is_configured": 
true, 00:15:07.269 "data_offset": 0, 00:15:07.269 "data_size": 65536 00:15:07.269 }, 00:15:07.269 { 00:15:07.269 "name": "BaseBdev4", 00:15:07.269 "uuid": "529ca66b-8a6a-4e6a-b4ef-2854fa967754", 00:15:07.269 "is_configured": true, 00:15:07.269 "data_offset": 0, 00:15:07.269 "data_size": 65536 00:15:07.269 } 00:15:07.269 ] 00:15:07.269 }' 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.269 06:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.835 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:07.835 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.835 06:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.835 06:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.835 06:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.835 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:07.835 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:07.835 06:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.835 06:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.835 [2024-12-06 06:41:26.438414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.095 "name": "Existed_Raid", 00:15:08.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.095 "strip_size_kb": 64, 00:15:08.095 "state": "configuring", 00:15:08.095 "raid_level": "raid0", 00:15:08.095 "superblock": false, 00:15:08.095 "num_base_bdevs": 4, 00:15:08.095 "num_base_bdevs_discovered": 2, 00:15:08.095 "num_base_bdevs_operational": 4, 00:15:08.095 
"base_bdevs_list": [ 00:15:08.095 { 00:15:08.095 "name": null, 00:15:08.095 "uuid": "906650e9-2bf3-4028-819e-919a2f2355cf", 00:15:08.095 "is_configured": false, 00:15:08.095 "data_offset": 0, 00:15:08.095 "data_size": 65536 00:15:08.095 }, 00:15:08.095 { 00:15:08.095 "name": null, 00:15:08.095 "uuid": "c3c7b7fb-ca86-447c-95da-bd34aeb32f6a", 00:15:08.095 "is_configured": false, 00:15:08.095 "data_offset": 0, 00:15:08.095 "data_size": 65536 00:15:08.095 }, 00:15:08.095 { 00:15:08.095 "name": "BaseBdev3", 00:15:08.095 "uuid": "53635c37-4884-4a85-a0c4-a98e6d806480", 00:15:08.095 "is_configured": true, 00:15:08.095 "data_offset": 0, 00:15:08.095 "data_size": 65536 00:15:08.095 }, 00:15:08.095 { 00:15:08.095 "name": "BaseBdev4", 00:15:08.095 "uuid": "529ca66b-8a6a-4e6a-b4ef-2854fa967754", 00:15:08.095 "is_configured": true, 00:15:08.095 "data_offset": 0, 00:15:08.095 "data_size": 65536 00:15:08.095 } 00:15:08.095 ] 00:15:08.095 }' 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.095 06:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:08.664 06:41:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.664 [2024-12-06 06:41:27.107715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.664 "name": "Existed_Raid", 00:15:08.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.664 "strip_size_kb": 64, 00:15:08.664 "state": "configuring", 00:15:08.664 "raid_level": "raid0", 00:15:08.664 "superblock": false, 00:15:08.664 "num_base_bdevs": 4, 00:15:08.664 "num_base_bdevs_discovered": 3, 00:15:08.664 "num_base_bdevs_operational": 4, 00:15:08.664 "base_bdevs_list": [ 00:15:08.664 { 00:15:08.664 "name": null, 00:15:08.664 "uuid": "906650e9-2bf3-4028-819e-919a2f2355cf", 00:15:08.664 "is_configured": false, 00:15:08.664 "data_offset": 0, 00:15:08.664 "data_size": 65536 00:15:08.664 }, 00:15:08.664 { 00:15:08.664 "name": "BaseBdev2", 00:15:08.664 "uuid": "c3c7b7fb-ca86-447c-95da-bd34aeb32f6a", 00:15:08.664 "is_configured": true, 00:15:08.664 "data_offset": 0, 00:15:08.664 "data_size": 65536 00:15:08.664 }, 00:15:08.664 { 00:15:08.664 "name": "BaseBdev3", 00:15:08.664 "uuid": "53635c37-4884-4a85-a0c4-a98e6d806480", 00:15:08.664 "is_configured": true, 00:15:08.664 "data_offset": 0, 00:15:08.664 "data_size": 65536 00:15:08.664 }, 00:15:08.664 { 00:15:08.664 "name": "BaseBdev4", 00:15:08.664 "uuid": "529ca66b-8a6a-4e6a-b4ef-2854fa967754", 00:15:08.664 "is_configured": true, 00:15:08.664 "data_offset": 0, 00:15:08.664 "data_size": 65536 00:15:08.664 } 00:15:08.664 ] 00:15:08.664 }' 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.664 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 906650e9-2bf3-4028-819e-919a2f2355cf 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.232 [2024-12-06 06:41:27.731270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:09.232 [2024-12-06 06:41:27.731337] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:09.232 [2024-12-06 06:41:27.731351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:09.232 [2024-12-06 06:41:27.731714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:09.232 [2024-12-06 06:41:27.731910] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:09.232 [2024-12-06 06:41:27.731941] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:09.232 [2024-12-06 06:41:27.732230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.232 NewBaseBdev 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.232 [ 00:15:09.232 { 
00:15:09.232 "name": "NewBaseBdev", 00:15:09.232 "aliases": [ 00:15:09.232 "906650e9-2bf3-4028-819e-919a2f2355cf" 00:15:09.232 ], 00:15:09.232 "product_name": "Malloc disk", 00:15:09.232 "block_size": 512, 00:15:09.232 "num_blocks": 65536, 00:15:09.232 "uuid": "906650e9-2bf3-4028-819e-919a2f2355cf", 00:15:09.232 "assigned_rate_limits": { 00:15:09.232 "rw_ios_per_sec": 0, 00:15:09.232 "rw_mbytes_per_sec": 0, 00:15:09.232 "r_mbytes_per_sec": 0, 00:15:09.232 "w_mbytes_per_sec": 0 00:15:09.232 }, 00:15:09.232 "claimed": true, 00:15:09.232 "claim_type": "exclusive_write", 00:15:09.232 "zoned": false, 00:15:09.232 "supported_io_types": { 00:15:09.232 "read": true, 00:15:09.232 "write": true, 00:15:09.232 "unmap": true, 00:15:09.232 "flush": true, 00:15:09.232 "reset": true, 00:15:09.232 "nvme_admin": false, 00:15:09.232 "nvme_io": false, 00:15:09.232 "nvme_io_md": false, 00:15:09.232 "write_zeroes": true, 00:15:09.232 "zcopy": true, 00:15:09.232 "get_zone_info": false, 00:15:09.232 "zone_management": false, 00:15:09.232 "zone_append": false, 00:15:09.232 "compare": false, 00:15:09.232 "compare_and_write": false, 00:15:09.232 "abort": true, 00:15:09.232 "seek_hole": false, 00:15:09.232 "seek_data": false, 00:15:09.232 "copy": true, 00:15:09.232 "nvme_iov_md": false 00:15:09.232 }, 00:15:09.232 "memory_domains": [ 00:15:09.232 { 00:15:09.232 "dma_device_id": "system", 00:15:09.232 "dma_device_type": 1 00:15:09.232 }, 00:15:09.232 { 00:15:09.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.232 "dma_device_type": 2 00:15:09.232 } 00:15:09.232 ], 00:15:09.232 "driver_specific": {} 00:15:09.232 } 00:15:09.232 ] 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.232 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:09.233 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:09.233 
06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.233 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.233 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:09.233 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.233 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:09.233 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.233 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.233 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.233 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.233 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.233 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.233 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.233 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.233 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.233 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.233 "name": "Existed_Raid", 00:15:09.233 "uuid": "1e4874b9-0298-48fa-9d9a-41d8b8d5f9d9", 00:15:09.233 "strip_size_kb": 64, 00:15:09.233 "state": "online", 00:15:09.233 "raid_level": "raid0", 00:15:09.233 "superblock": false, 00:15:09.233 "num_base_bdevs": 4, 00:15:09.233 "num_base_bdevs_discovered": 4, 00:15:09.233 
"num_base_bdevs_operational": 4, 00:15:09.233 "base_bdevs_list": [ 00:15:09.233 { 00:15:09.233 "name": "NewBaseBdev", 00:15:09.233 "uuid": "906650e9-2bf3-4028-819e-919a2f2355cf", 00:15:09.233 "is_configured": true, 00:15:09.233 "data_offset": 0, 00:15:09.233 "data_size": 65536 00:15:09.233 }, 00:15:09.233 { 00:15:09.233 "name": "BaseBdev2", 00:15:09.233 "uuid": "c3c7b7fb-ca86-447c-95da-bd34aeb32f6a", 00:15:09.233 "is_configured": true, 00:15:09.233 "data_offset": 0, 00:15:09.233 "data_size": 65536 00:15:09.233 }, 00:15:09.233 { 00:15:09.233 "name": "BaseBdev3", 00:15:09.233 "uuid": "53635c37-4884-4a85-a0c4-a98e6d806480", 00:15:09.233 "is_configured": true, 00:15:09.233 "data_offset": 0, 00:15:09.233 "data_size": 65536 00:15:09.233 }, 00:15:09.233 { 00:15:09.233 "name": "BaseBdev4", 00:15:09.233 "uuid": "529ca66b-8a6a-4e6a-b4ef-2854fa967754", 00:15:09.233 "is_configured": true, 00:15:09.233 "data_offset": 0, 00:15:09.233 "data_size": 65536 00:15:09.233 } 00:15:09.233 ] 00:15:09.233 }' 00:15:09.233 06:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.233 06:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.800 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:09.800 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:09.800 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:09.800 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:09.800 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:09.800 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:09.800 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:15:09.800 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:09.800 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.800 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.800 [2024-12-06 06:41:28.183921] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.800 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.800 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:09.800 "name": "Existed_Raid", 00:15:09.800 "aliases": [ 00:15:09.800 "1e4874b9-0298-48fa-9d9a-41d8b8d5f9d9" 00:15:09.800 ], 00:15:09.800 "product_name": "Raid Volume", 00:15:09.800 "block_size": 512, 00:15:09.800 "num_blocks": 262144, 00:15:09.800 "uuid": "1e4874b9-0298-48fa-9d9a-41d8b8d5f9d9", 00:15:09.800 "assigned_rate_limits": { 00:15:09.800 "rw_ios_per_sec": 0, 00:15:09.800 "rw_mbytes_per_sec": 0, 00:15:09.800 "r_mbytes_per_sec": 0, 00:15:09.800 "w_mbytes_per_sec": 0 00:15:09.800 }, 00:15:09.800 "claimed": false, 00:15:09.800 "zoned": false, 00:15:09.800 "supported_io_types": { 00:15:09.800 "read": true, 00:15:09.800 "write": true, 00:15:09.800 "unmap": true, 00:15:09.800 "flush": true, 00:15:09.800 "reset": true, 00:15:09.800 "nvme_admin": false, 00:15:09.800 "nvme_io": false, 00:15:09.800 "nvme_io_md": false, 00:15:09.800 "write_zeroes": true, 00:15:09.800 "zcopy": false, 00:15:09.800 "get_zone_info": false, 00:15:09.800 "zone_management": false, 00:15:09.800 "zone_append": false, 00:15:09.800 "compare": false, 00:15:09.800 "compare_and_write": false, 00:15:09.800 "abort": false, 00:15:09.800 "seek_hole": false, 00:15:09.800 "seek_data": false, 00:15:09.800 "copy": false, 00:15:09.800 "nvme_iov_md": false 00:15:09.800 }, 00:15:09.800 "memory_domains": [ 00:15:09.800 { 00:15:09.800 "dma_device_id": "system", 
00:15:09.800 "dma_device_type": 1 00:15:09.800 }, 00:15:09.800 { 00:15:09.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.800 "dma_device_type": 2 00:15:09.800 }, 00:15:09.800 { 00:15:09.800 "dma_device_id": "system", 00:15:09.800 "dma_device_type": 1 00:15:09.800 }, 00:15:09.800 { 00:15:09.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.800 "dma_device_type": 2 00:15:09.800 }, 00:15:09.800 { 00:15:09.800 "dma_device_id": "system", 00:15:09.800 "dma_device_type": 1 00:15:09.800 }, 00:15:09.800 { 00:15:09.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.800 "dma_device_type": 2 00:15:09.800 }, 00:15:09.800 { 00:15:09.800 "dma_device_id": "system", 00:15:09.800 "dma_device_type": 1 00:15:09.800 }, 00:15:09.800 { 00:15:09.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.800 "dma_device_type": 2 00:15:09.800 } 00:15:09.800 ], 00:15:09.800 "driver_specific": { 00:15:09.800 "raid": { 00:15:09.800 "uuid": "1e4874b9-0298-48fa-9d9a-41d8b8d5f9d9", 00:15:09.800 "strip_size_kb": 64, 00:15:09.800 "state": "online", 00:15:09.800 "raid_level": "raid0", 00:15:09.800 "superblock": false, 00:15:09.800 "num_base_bdevs": 4, 00:15:09.800 "num_base_bdevs_discovered": 4, 00:15:09.800 "num_base_bdevs_operational": 4, 00:15:09.800 "base_bdevs_list": [ 00:15:09.800 { 00:15:09.800 "name": "NewBaseBdev", 00:15:09.800 "uuid": "906650e9-2bf3-4028-819e-919a2f2355cf", 00:15:09.800 "is_configured": true, 00:15:09.800 "data_offset": 0, 00:15:09.800 "data_size": 65536 00:15:09.800 }, 00:15:09.800 { 00:15:09.800 "name": "BaseBdev2", 00:15:09.800 "uuid": "c3c7b7fb-ca86-447c-95da-bd34aeb32f6a", 00:15:09.800 "is_configured": true, 00:15:09.800 "data_offset": 0, 00:15:09.800 "data_size": 65536 00:15:09.800 }, 00:15:09.800 { 00:15:09.800 "name": "BaseBdev3", 00:15:09.800 "uuid": "53635c37-4884-4a85-a0c4-a98e6d806480", 00:15:09.800 "is_configured": true, 00:15:09.800 "data_offset": 0, 00:15:09.800 "data_size": 65536 00:15:09.800 }, 00:15:09.800 { 00:15:09.800 "name": "BaseBdev4", 
00:15:09.800 "uuid": "529ca66b-8a6a-4e6a-b4ef-2854fa967754", 00:15:09.800 "is_configured": true, 00:15:09.800 "data_offset": 0, 00:15:09.800 "data_size": 65536 00:15:09.800 } 00:15:09.800 ] 00:15:09.800 } 00:15:09.800 } 00:15:09.800 }' 00:15:09.800 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:09.800 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:09.800 BaseBdev2 00:15:09.800 BaseBdev3 00:15:09.800 BaseBdev4' 00:15:09.800 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.801 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.060 [2024-12-06 06:41:28.503954] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:10.060 [2024-12-06 06:41:28.503993] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:10.060 [2024-12-06 06:41:28.504091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.060 [2024-12-06 06:41:28.504182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:10.060 [2024-12-06 06:41:28.504204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69650 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 69650 
']' 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69650 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69650 00:15:10.060 killing process with pid 69650 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69650' 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69650 00:15:10.060 [2024-12-06 06:41:28.543394] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:10.060 06:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69650 00:15:10.318 [2024-12-06 06:41:28.900219] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:11.695 06:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:11.695 00:15:11.695 real 0m12.560s 00:15:11.695 user 0m20.833s 00:15:11.695 sys 0m1.706s 00:15:11.695 06:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:11.695 06:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.695 ************************************ 00:15:11.695 END TEST raid_state_function_test 00:15:11.695 ************************************ 00:15:11.695 06:41:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:15:11.695 
06:41:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:11.695 06:41:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:11.695 06:41:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:11.695 ************************************ 00:15:11.695 START TEST raid_state_function_test_sb 00:15:11.695 ************************************ 00:15:11.695 06:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:15:11.695 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:15:11.695 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:11.695 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:11.695 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:11.695 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:11.695 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:11.695 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:11.695 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70338 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:11.696 Process raid pid: 70338 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70338' 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70338 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70338 ']' 00:15:11.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.696 06:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.696 [2024-12-06 06:41:30.100959] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:15:11.696 [2024-12-06 06:41:30.101114] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.696 [2024-12-06 06:41:30.312274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.953 [2024-12-06 06:41:30.448304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.211 [2024-12-06 06:41:30.655690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.211 [2024-12-06 06:41:30.655750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.469 [2024-12-06 06:41:31.100344] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.469 [2024-12-06 06:41:31.100416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.469 [2024-12-06 06:41:31.100434] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:12.469 [2024-12-06 06:41:31.100452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.469 [2024-12-06 06:41:31.100462] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:15:12.469 [2024-12-06 06:41:31.100476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:12.469 [2024-12-06 06:41:31.100486] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:12.469 [2024-12-06 06:41:31.100501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.469 06:41:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.469 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.727 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.727 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.727 "name": "Existed_Raid", 00:15:12.727 "uuid": "2b66b113-4ff6-4bfb-b3f5-d2a3750d967a", 00:15:12.727 "strip_size_kb": 64, 00:15:12.727 "state": "configuring", 00:15:12.727 "raid_level": "raid0", 00:15:12.727 "superblock": true, 00:15:12.727 "num_base_bdevs": 4, 00:15:12.727 "num_base_bdevs_discovered": 0, 00:15:12.727 "num_base_bdevs_operational": 4, 00:15:12.727 "base_bdevs_list": [ 00:15:12.727 { 00:15:12.727 "name": "BaseBdev1", 00:15:12.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.727 "is_configured": false, 00:15:12.727 "data_offset": 0, 00:15:12.727 "data_size": 0 00:15:12.727 }, 00:15:12.727 { 00:15:12.727 "name": "BaseBdev2", 00:15:12.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.727 "is_configured": false, 00:15:12.727 "data_offset": 0, 00:15:12.727 "data_size": 0 00:15:12.727 }, 00:15:12.727 { 00:15:12.727 "name": "BaseBdev3", 00:15:12.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.727 "is_configured": false, 00:15:12.727 "data_offset": 0, 00:15:12.727 "data_size": 0 00:15:12.727 }, 00:15:12.727 { 00:15:12.727 "name": "BaseBdev4", 00:15:12.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.727 "is_configured": false, 00:15:12.727 "data_offset": 0, 00:15:12.727 "data_size": 0 00:15:12.727 } 00:15:12.727 ] 00:15:12.727 }' 00:15:12.727 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.727 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.985 06:41:31 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:12.985 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.985 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.985 [2024-12-06 06:41:31.600423] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:12.985 [2024-12-06 06:41:31.600473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:12.985 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.985 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:12.985 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.985 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.985 [2024-12-06 06:41:31.608421] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.985 [2024-12-06 06:41:31.608476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.985 [2024-12-06 06:41:31.608493] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:12.985 [2024-12-06 06:41:31.608509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.985 [2024-12-06 06:41:31.608519] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:12.985 [2024-12-06 06:41:31.608554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:12.985 [2024-12-06 06:41:31.608565] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:15:12.985 [2024-12-06 06:41:31.608579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:12.985 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.985 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:12.985 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.985 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.243 [2024-12-06 06:41:31.653840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.243 BaseBdev1 00:15:13.243 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.243 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:13.243 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:13.243 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:13.243 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:13.243 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:13.243 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:13.243 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:13.243 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.243 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.243 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:13.243 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:13.243 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.243 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.243 [ 00:15:13.243 { 00:15:13.243 "name": "BaseBdev1", 00:15:13.243 "aliases": [ 00:15:13.243 "9ca48d9b-ddd3-4e93-9aa2-cc80486e3e79" 00:15:13.243 ], 00:15:13.243 "product_name": "Malloc disk", 00:15:13.243 "block_size": 512, 00:15:13.243 "num_blocks": 65536, 00:15:13.243 "uuid": "9ca48d9b-ddd3-4e93-9aa2-cc80486e3e79", 00:15:13.243 "assigned_rate_limits": { 00:15:13.243 "rw_ios_per_sec": 0, 00:15:13.243 "rw_mbytes_per_sec": 0, 00:15:13.243 "r_mbytes_per_sec": 0, 00:15:13.243 "w_mbytes_per_sec": 0 00:15:13.243 }, 00:15:13.243 "claimed": true, 00:15:13.243 "claim_type": "exclusive_write", 00:15:13.243 "zoned": false, 00:15:13.243 "supported_io_types": { 00:15:13.243 "read": true, 00:15:13.243 "write": true, 00:15:13.243 "unmap": true, 00:15:13.243 "flush": true, 00:15:13.243 "reset": true, 00:15:13.243 "nvme_admin": false, 00:15:13.243 "nvme_io": false, 00:15:13.243 "nvme_io_md": false, 00:15:13.243 "write_zeroes": true, 00:15:13.243 "zcopy": true, 00:15:13.243 "get_zone_info": false, 00:15:13.243 "zone_management": false, 00:15:13.243 "zone_append": false, 00:15:13.243 "compare": false, 00:15:13.243 "compare_and_write": false, 00:15:13.243 "abort": true, 00:15:13.243 "seek_hole": false, 00:15:13.243 "seek_data": false, 00:15:13.243 "copy": true, 00:15:13.243 "nvme_iov_md": false 00:15:13.243 }, 00:15:13.243 "memory_domains": [ 00:15:13.243 { 00:15:13.243 "dma_device_id": "system", 00:15:13.243 "dma_device_type": 1 00:15:13.243 }, 00:15:13.243 { 00:15:13.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.243 "dma_device_type": 2 00:15:13.243 } 00:15:13.243 ], 00:15:13.243 "driver_specific": {} 
00:15:13.243 } 00:15:13.243 ] 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.244 "name": "Existed_Raid", 00:15:13.244 "uuid": "e9eefc37-042b-4781-9cc0-036e6d2819c2", 00:15:13.244 "strip_size_kb": 64, 00:15:13.244 "state": "configuring", 00:15:13.244 "raid_level": "raid0", 00:15:13.244 "superblock": true, 00:15:13.244 "num_base_bdevs": 4, 00:15:13.244 "num_base_bdevs_discovered": 1, 00:15:13.244 "num_base_bdevs_operational": 4, 00:15:13.244 "base_bdevs_list": [ 00:15:13.244 { 00:15:13.244 "name": "BaseBdev1", 00:15:13.244 "uuid": "9ca48d9b-ddd3-4e93-9aa2-cc80486e3e79", 00:15:13.244 "is_configured": true, 00:15:13.244 "data_offset": 2048, 00:15:13.244 "data_size": 63488 00:15:13.244 }, 00:15:13.244 { 00:15:13.244 "name": "BaseBdev2", 00:15:13.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.244 "is_configured": false, 00:15:13.244 "data_offset": 0, 00:15:13.244 "data_size": 0 00:15:13.244 }, 00:15:13.244 { 00:15:13.244 "name": "BaseBdev3", 00:15:13.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.244 "is_configured": false, 00:15:13.244 "data_offset": 0, 00:15:13.244 "data_size": 0 00:15:13.244 }, 00:15:13.244 { 00:15:13.244 "name": "BaseBdev4", 00:15:13.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.244 "is_configured": false, 00:15:13.244 "data_offset": 0, 00:15:13.244 "data_size": 0 00:15:13.244 } 00:15:13.244 ] 00:15:13.244 }' 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.244 06:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:13.822 [2024-12-06 06:41:32.214078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:13.822 [2024-12-06 06:41:32.214150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.822 [2024-12-06 06:41:32.222125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.822 [2024-12-06 06:41:32.224675] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:13.822 [2024-12-06 06:41:32.224731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:13.822 [2024-12-06 06:41:32.224749] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:13.822 [2024-12-06 06:41:32.224767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:13.822 [2024-12-06 06:41:32.224778] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:13.822 [2024-12-06 06:41:32.224792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:13.822 06:41:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.822 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.822 "name": 
"Existed_Raid", 00:15:13.822 "uuid": "3bd88129-1e51-4520-bfd7-bb8146322aaa", 00:15:13.822 "strip_size_kb": 64, 00:15:13.822 "state": "configuring", 00:15:13.822 "raid_level": "raid0", 00:15:13.822 "superblock": true, 00:15:13.822 "num_base_bdevs": 4, 00:15:13.822 "num_base_bdevs_discovered": 1, 00:15:13.822 "num_base_bdevs_operational": 4, 00:15:13.822 "base_bdevs_list": [ 00:15:13.822 { 00:15:13.822 "name": "BaseBdev1", 00:15:13.822 "uuid": "9ca48d9b-ddd3-4e93-9aa2-cc80486e3e79", 00:15:13.822 "is_configured": true, 00:15:13.822 "data_offset": 2048, 00:15:13.822 "data_size": 63488 00:15:13.822 }, 00:15:13.822 { 00:15:13.822 "name": "BaseBdev2", 00:15:13.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.822 "is_configured": false, 00:15:13.822 "data_offset": 0, 00:15:13.822 "data_size": 0 00:15:13.822 }, 00:15:13.822 { 00:15:13.822 "name": "BaseBdev3", 00:15:13.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.822 "is_configured": false, 00:15:13.822 "data_offset": 0, 00:15:13.822 "data_size": 0 00:15:13.822 }, 00:15:13.822 { 00:15:13.822 "name": "BaseBdev4", 00:15:13.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.823 "is_configured": false, 00:15:13.823 "data_offset": 0, 00:15:13.823 "data_size": 0 00:15:13.823 } 00:15:13.823 ] 00:15:13.823 }' 00:15:13.823 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.823 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.388 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:14.388 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.388 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.389 [2024-12-06 06:41:32.780995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:15:14.389 BaseBdev2 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.389 [ 00:15:14.389 { 00:15:14.389 "name": "BaseBdev2", 00:15:14.389 "aliases": [ 00:15:14.389 "7f7d8e0d-f6d8-4b6a-8f87-cac8bd1235e3" 00:15:14.389 ], 00:15:14.389 "product_name": "Malloc disk", 00:15:14.389 "block_size": 512, 00:15:14.389 "num_blocks": 65536, 00:15:14.389 "uuid": "7f7d8e0d-f6d8-4b6a-8f87-cac8bd1235e3", 00:15:14.389 
"assigned_rate_limits": { 00:15:14.389 "rw_ios_per_sec": 0, 00:15:14.389 "rw_mbytes_per_sec": 0, 00:15:14.389 "r_mbytes_per_sec": 0, 00:15:14.389 "w_mbytes_per_sec": 0 00:15:14.389 }, 00:15:14.389 "claimed": true, 00:15:14.389 "claim_type": "exclusive_write", 00:15:14.389 "zoned": false, 00:15:14.389 "supported_io_types": { 00:15:14.389 "read": true, 00:15:14.389 "write": true, 00:15:14.389 "unmap": true, 00:15:14.389 "flush": true, 00:15:14.389 "reset": true, 00:15:14.389 "nvme_admin": false, 00:15:14.389 "nvme_io": false, 00:15:14.389 "nvme_io_md": false, 00:15:14.389 "write_zeroes": true, 00:15:14.389 "zcopy": true, 00:15:14.389 "get_zone_info": false, 00:15:14.389 "zone_management": false, 00:15:14.389 "zone_append": false, 00:15:14.389 "compare": false, 00:15:14.389 "compare_and_write": false, 00:15:14.389 "abort": true, 00:15:14.389 "seek_hole": false, 00:15:14.389 "seek_data": false, 00:15:14.389 "copy": true, 00:15:14.389 "nvme_iov_md": false 00:15:14.389 }, 00:15:14.389 "memory_domains": [ 00:15:14.389 { 00:15:14.389 "dma_device_id": "system", 00:15:14.389 "dma_device_type": 1 00:15:14.389 }, 00:15:14.389 { 00:15:14.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.389 "dma_device_type": 2 00:15:14.389 } 00:15:14.389 ], 00:15:14.389 "driver_specific": {} 00:15:14.389 } 00:15:14.389 ] 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.389 "name": "Existed_Raid", 00:15:14.389 "uuid": "3bd88129-1e51-4520-bfd7-bb8146322aaa", 00:15:14.389 "strip_size_kb": 64, 00:15:14.389 "state": "configuring", 00:15:14.389 "raid_level": "raid0", 00:15:14.389 "superblock": true, 00:15:14.389 "num_base_bdevs": 4, 00:15:14.389 "num_base_bdevs_discovered": 2, 00:15:14.389 "num_base_bdevs_operational": 4, 
00:15:14.389 "base_bdevs_list": [ 00:15:14.389 { 00:15:14.389 "name": "BaseBdev1", 00:15:14.389 "uuid": "9ca48d9b-ddd3-4e93-9aa2-cc80486e3e79", 00:15:14.389 "is_configured": true, 00:15:14.389 "data_offset": 2048, 00:15:14.389 "data_size": 63488 00:15:14.389 }, 00:15:14.389 { 00:15:14.389 "name": "BaseBdev2", 00:15:14.389 "uuid": "7f7d8e0d-f6d8-4b6a-8f87-cac8bd1235e3", 00:15:14.389 "is_configured": true, 00:15:14.389 "data_offset": 2048, 00:15:14.389 "data_size": 63488 00:15:14.389 }, 00:15:14.389 { 00:15:14.389 "name": "BaseBdev3", 00:15:14.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.389 "is_configured": false, 00:15:14.389 "data_offset": 0, 00:15:14.389 "data_size": 0 00:15:14.389 }, 00:15:14.389 { 00:15:14.389 "name": "BaseBdev4", 00:15:14.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.389 "is_configured": false, 00:15:14.389 "data_offset": 0, 00:15:14.389 "data_size": 0 00:15:14.389 } 00:15:14.389 ] 00:15:14.389 }' 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.389 06:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.954 [2024-12-06 06:41:33.386327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:14.954 BaseBdev3 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.954 [ 00:15:14.954 { 00:15:14.954 "name": "BaseBdev3", 00:15:14.954 "aliases": [ 00:15:14.954 "12d2fa2a-982a-49c9-bf57-e5a174be09ca" 00:15:14.954 ], 00:15:14.954 "product_name": "Malloc disk", 00:15:14.954 "block_size": 512, 00:15:14.954 "num_blocks": 65536, 00:15:14.954 "uuid": "12d2fa2a-982a-49c9-bf57-e5a174be09ca", 00:15:14.954 "assigned_rate_limits": { 00:15:14.954 "rw_ios_per_sec": 0, 00:15:14.954 "rw_mbytes_per_sec": 0, 00:15:14.954 "r_mbytes_per_sec": 0, 00:15:14.954 "w_mbytes_per_sec": 0 00:15:14.954 }, 00:15:14.954 "claimed": true, 00:15:14.954 "claim_type": "exclusive_write", 00:15:14.954 "zoned": false, 00:15:14.954 "supported_io_types": { 00:15:14.954 "read": true, 00:15:14.954 
"write": true, 00:15:14.954 "unmap": true, 00:15:14.954 "flush": true, 00:15:14.954 "reset": true, 00:15:14.954 "nvme_admin": false, 00:15:14.954 "nvme_io": false, 00:15:14.954 "nvme_io_md": false, 00:15:14.954 "write_zeroes": true, 00:15:14.954 "zcopy": true, 00:15:14.954 "get_zone_info": false, 00:15:14.954 "zone_management": false, 00:15:14.954 "zone_append": false, 00:15:14.954 "compare": false, 00:15:14.954 "compare_and_write": false, 00:15:14.954 "abort": true, 00:15:14.954 "seek_hole": false, 00:15:14.954 "seek_data": false, 00:15:14.954 "copy": true, 00:15:14.954 "nvme_iov_md": false 00:15:14.954 }, 00:15:14.954 "memory_domains": [ 00:15:14.954 { 00:15:14.954 "dma_device_id": "system", 00:15:14.954 "dma_device_type": 1 00:15:14.954 }, 00:15:14.954 { 00:15:14.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.954 "dma_device_type": 2 00:15:14.954 } 00:15:14.954 ], 00:15:14.954 "driver_specific": {} 00:15:14.954 } 00:15:14.954 ] 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.954 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.954 "name": "Existed_Raid", 00:15:14.954 "uuid": "3bd88129-1e51-4520-bfd7-bb8146322aaa", 00:15:14.954 "strip_size_kb": 64, 00:15:14.954 "state": "configuring", 00:15:14.954 "raid_level": "raid0", 00:15:14.954 "superblock": true, 00:15:14.954 "num_base_bdevs": 4, 00:15:14.954 "num_base_bdevs_discovered": 3, 00:15:14.954 "num_base_bdevs_operational": 4, 00:15:14.954 "base_bdevs_list": [ 00:15:14.954 { 00:15:14.954 "name": "BaseBdev1", 00:15:14.954 "uuid": "9ca48d9b-ddd3-4e93-9aa2-cc80486e3e79", 00:15:14.954 "is_configured": true, 00:15:14.954 "data_offset": 2048, 00:15:14.954 "data_size": 63488 00:15:14.954 }, 00:15:14.954 { 00:15:14.954 "name": "BaseBdev2", 00:15:14.954 "uuid": 
"7f7d8e0d-f6d8-4b6a-8f87-cac8bd1235e3", 00:15:14.954 "is_configured": true, 00:15:14.954 "data_offset": 2048, 00:15:14.954 "data_size": 63488 00:15:14.954 }, 00:15:14.954 { 00:15:14.954 "name": "BaseBdev3", 00:15:14.954 "uuid": "12d2fa2a-982a-49c9-bf57-e5a174be09ca", 00:15:14.954 "is_configured": true, 00:15:14.954 "data_offset": 2048, 00:15:14.954 "data_size": 63488 00:15:14.954 }, 00:15:14.954 { 00:15:14.954 "name": "BaseBdev4", 00:15:14.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.954 "is_configured": false, 00:15:14.954 "data_offset": 0, 00:15:14.954 "data_size": 0 00:15:14.954 } 00:15:14.954 ] 00:15:14.954 }' 00:15:14.955 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.955 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.520 [2024-12-06 06:41:33.933411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:15.520 [2024-12-06 06:41:33.933833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:15.520 [2024-12-06 06:41:33.933854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:15.520 BaseBdev4 00:15:15.520 [2024-12-06 06:41:33.934203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:15.520 [2024-12-06 06:41:33.934398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:15.520 [2024-12-06 06:41:33.934419] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:15:15.520 [2024-12-06 06:41:33.934622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.520 [ 00:15:15.520 { 00:15:15.520 "name": "BaseBdev4", 00:15:15.520 "aliases": [ 00:15:15.520 "24055ab6-b6f9-4cb5-894d-2e86f81869c3" 00:15:15.520 ], 00:15:15.520 "product_name": "Malloc disk", 00:15:15.520 "block_size": 512, 00:15:15.520 
"num_blocks": 65536, 00:15:15.520 "uuid": "24055ab6-b6f9-4cb5-894d-2e86f81869c3", 00:15:15.520 "assigned_rate_limits": { 00:15:15.520 "rw_ios_per_sec": 0, 00:15:15.520 "rw_mbytes_per_sec": 0, 00:15:15.520 "r_mbytes_per_sec": 0, 00:15:15.520 "w_mbytes_per_sec": 0 00:15:15.520 }, 00:15:15.520 "claimed": true, 00:15:15.520 "claim_type": "exclusive_write", 00:15:15.520 "zoned": false, 00:15:15.520 "supported_io_types": { 00:15:15.520 "read": true, 00:15:15.520 "write": true, 00:15:15.520 "unmap": true, 00:15:15.520 "flush": true, 00:15:15.520 "reset": true, 00:15:15.520 "nvme_admin": false, 00:15:15.520 "nvme_io": false, 00:15:15.520 "nvme_io_md": false, 00:15:15.520 "write_zeroes": true, 00:15:15.520 "zcopy": true, 00:15:15.520 "get_zone_info": false, 00:15:15.520 "zone_management": false, 00:15:15.520 "zone_append": false, 00:15:15.520 "compare": false, 00:15:15.520 "compare_and_write": false, 00:15:15.520 "abort": true, 00:15:15.520 "seek_hole": false, 00:15:15.520 "seek_data": false, 00:15:15.520 "copy": true, 00:15:15.520 "nvme_iov_md": false 00:15:15.520 }, 00:15:15.520 "memory_domains": [ 00:15:15.520 { 00:15:15.520 "dma_device_id": "system", 00:15:15.520 "dma_device_type": 1 00:15:15.520 }, 00:15:15.520 { 00:15:15.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.520 "dma_device_type": 2 00:15:15.520 } 00:15:15.520 ], 00:15:15.520 "driver_specific": {} 00:15:15.520 } 00:15:15.520 ] 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.520 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.521 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.521 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.521 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.521 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.521 06:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.521 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.521 06:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.521 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.521 "name": "Existed_Raid", 00:15:15.521 "uuid": "3bd88129-1e51-4520-bfd7-bb8146322aaa", 00:15:15.521 "strip_size_kb": 64, 00:15:15.521 "state": "online", 00:15:15.521 "raid_level": "raid0", 00:15:15.521 "superblock": true, 00:15:15.521 "num_base_bdevs": 4, 
00:15:15.521 "num_base_bdevs_discovered": 4, 00:15:15.521 "num_base_bdevs_operational": 4, 00:15:15.521 "base_bdevs_list": [ 00:15:15.521 { 00:15:15.521 "name": "BaseBdev1", 00:15:15.521 "uuid": "9ca48d9b-ddd3-4e93-9aa2-cc80486e3e79", 00:15:15.521 "is_configured": true, 00:15:15.521 "data_offset": 2048, 00:15:15.521 "data_size": 63488 00:15:15.521 }, 00:15:15.521 { 00:15:15.521 "name": "BaseBdev2", 00:15:15.521 "uuid": "7f7d8e0d-f6d8-4b6a-8f87-cac8bd1235e3", 00:15:15.521 "is_configured": true, 00:15:15.521 "data_offset": 2048, 00:15:15.521 "data_size": 63488 00:15:15.521 }, 00:15:15.521 { 00:15:15.521 "name": "BaseBdev3", 00:15:15.521 "uuid": "12d2fa2a-982a-49c9-bf57-e5a174be09ca", 00:15:15.521 "is_configured": true, 00:15:15.521 "data_offset": 2048, 00:15:15.521 "data_size": 63488 00:15:15.521 }, 00:15:15.521 { 00:15:15.521 "name": "BaseBdev4", 00:15:15.521 "uuid": "24055ab6-b6f9-4cb5-894d-2e86f81869c3", 00:15:15.521 "is_configured": true, 00:15:15.521 "data_offset": 2048, 00:15:15.521 "data_size": 63488 00:15:15.521 } 00:15:15.521 ] 00:15:15.521 }' 00:15:15.521 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.521 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.086 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:16.086 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:16.086 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:16.086 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:16.086 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:16.086 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:16.086 
06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:16.086 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:16.086 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.086 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.086 [2024-12-06 06:41:34.474102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.086 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.086 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:16.086 "name": "Existed_Raid", 00:15:16.086 "aliases": [ 00:15:16.086 "3bd88129-1e51-4520-bfd7-bb8146322aaa" 00:15:16.086 ], 00:15:16.086 "product_name": "Raid Volume", 00:15:16.086 "block_size": 512, 00:15:16.086 "num_blocks": 253952, 00:15:16.086 "uuid": "3bd88129-1e51-4520-bfd7-bb8146322aaa", 00:15:16.086 "assigned_rate_limits": { 00:15:16.086 "rw_ios_per_sec": 0, 00:15:16.086 "rw_mbytes_per_sec": 0, 00:15:16.086 "r_mbytes_per_sec": 0, 00:15:16.086 "w_mbytes_per_sec": 0 00:15:16.086 }, 00:15:16.086 "claimed": false, 00:15:16.086 "zoned": false, 00:15:16.086 "supported_io_types": { 00:15:16.086 "read": true, 00:15:16.086 "write": true, 00:15:16.086 "unmap": true, 00:15:16.086 "flush": true, 00:15:16.086 "reset": true, 00:15:16.086 "nvme_admin": false, 00:15:16.086 "nvme_io": false, 00:15:16.086 "nvme_io_md": false, 00:15:16.086 "write_zeroes": true, 00:15:16.087 "zcopy": false, 00:15:16.087 "get_zone_info": false, 00:15:16.087 "zone_management": false, 00:15:16.087 "zone_append": false, 00:15:16.087 "compare": false, 00:15:16.087 "compare_and_write": false, 00:15:16.087 "abort": false, 00:15:16.087 "seek_hole": false, 00:15:16.087 "seek_data": false, 00:15:16.087 "copy": false, 00:15:16.087 
"nvme_iov_md": false 00:15:16.087 }, 00:15:16.087 "memory_domains": [ 00:15:16.087 { 00:15:16.087 "dma_device_id": "system", 00:15:16.087 "dma_device_type": 1 00:15:16.087 }, 00:15:16.087 { 00:15:16.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.087 "dma_device_type": 2 00:15:16.087 }, 00:15:16.087 { 00:15:16.087 "dma_device_id": "system", 00:15:16.087 "dma_device_type": 1 00:15:16.087 }, 00:15:16.087 { 00:15:16.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.087 "dma_device_type": 2 00:15:16.087 }, 00:15:16.087 { 00:15:16.087 "dma_device_id": "system", 00:15:16.087 "dma_device_type": 1 00:15:16.087 }, 00:15:16.087 { 00:15:16.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.087 "dma_device_type": 2 00:15:16.087 }, 00:15:16.087 { 00:15:16.087 "dma_device_id": "system", 00:15:16.087 "dma_device_type": 1 00:15:16.087 }, 00:15:16.087 { 00:15:16.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.087 "dma_device_type": 2 00:15:16.087 } 00:15:16.087 ], 00:15:16.087 "driver_specific": { 00:15:16.087 "raid": { 00:15:16.087 "uuid": "3bd88129-1e51-4520-bfd7-bb8146322aaa", 00:15:16.087 "strip_size_kb": 64, 00:15:16.087 "state": "online", 00:15:16.087 "raid_level": "raid0", 00:15:16.087 "superblock": true, 00:15:16.087 "num_base_bdevs": 4, 00:15:16.087 "num_base_bdevs_discovered": 4, 00:15:16.087 "num_base_bdevs_operational": 4, 00:15:16.087 "base_bdevs_list": [ 00:15:16.087 { 00:15:16.087 "name": "BaseBdev1", 00:15:16.087 "uuid": "9ca48d9b-ddd3-4e93-9aa2-cc80486e3e79", 00:15:16.087 "is_configured": true, 00:15:16.087 "data_offset": 2048, 00:15:16.087 "data_size": 63488 00:15:16.087 }, 00:15:16.087 { 00:15:16.087 "name": "BaseBdev2", 00:15:16.087 "uuid": "7f7d8e0d-f6d8-4b6a-8f87-cac8bd1235e3", 00:15:16.087 "is_configured": true, 00:15:16.087 "data_offset": 2048, 00:15:16.087 "data_size": 63488 00:15:16.087 }, 00:15:16.087 { 00:15:16.087 "name": "BaseBdev3", 00:15:16.087 "uuid": "12d2fa2a-982a-49c9-bf57-e5a174be09ca", 00:15:16.087 "is_configured": true, 
00:15:16.087 "data_offset": 2048, 00:15:16.087 "data_size": 63488 00:15:16.087 }, 00:15:16.087 { 00:15:16.087 "name": "BaseBdev4", 00:15:16.087 "uuid": "24055ab6-b6f9-4cb5-894d-2e86f81869c3", 00:15:16.087 "is_configured": true, 00:15:16.087 "data_offset": 2048, 00:15:16.087 "data_size": 63488 00:15:16.087 } 00:15:16.087 ] 00:15:16.087 } 00:15:16.087 } 00:15:16.087 }' 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:16.087 BaseBdev2 00:15:16.087 BaseBdev3 00:15:16.087 BaseBdev4' 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.087 06:41:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.087 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.345 [2024-12-06 06:41:34.833812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:16.345 [2024-12-06 06:41:34.833854] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.345 [2024-12-06 06:41:34.833922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.345 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.346 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.346 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.346 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.346 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.346 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:16.346 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.346 "name": "Existed_Raid", 00:15:16.346 "uuid": "3bd88129-1e51-4520-bfd7-bb8146322aaa", 00:15:16.346 "strip_size_kb": 64, 00:15:16.346 "state": "offline", 00:15:16.346 "raid_level": "raid0", 00:15:16.346 "superblock": true, 00:15:16.346 "num_base_bdevs": 4, 00:15:16.346 "num_base_bdevs_discovered": 3, 00:15:16.346 "num_base_bdevs_operational": 3, 00:15:16.346 "base_bdevs_list": [ 00:15:16.346 { 00:15:16.346 "name": null, 00:15:16.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.346 "is_configured": false, 00:15:16.346 "data_offset": 0, 00:15:16.346 "data_size": 63488 00:15:16.346 }, 00:15:16.346 { 00:15:16.346 "name": "BaseBdev2", 00:15:16.346 "uuid": "7f7d8e0d-f6d8-4b6a-8f87-cac8bd1235e3", 00:15:16.346 "is_configured": true, 00:15:16.346 "data_offset": 2048, 00:15:16.346 "data_size": 63488 00:15:16.346 }, 00:15:16.346 { 00:15:16.346 "name": "BaseBdev3", 00:15:16.346 "uuid": "12d2fa2a-982a-49c9-bf57-e5a174be09ca", 00:15:16.346 "is_configured": true, 00:15:16.346 "data_offset": 2048, 00:15:16.346 "data_size": 63488 00:15:16.346 }, 00:15:16.346 { 00:15:16.346 "name": "BaseBdev4", 00:15:16.346 "uuid": "24055ab6-b6f9-4cb5-894d-2e86f81869c3", 00:15:16.346 "is_configured": true, 00:15:16.346 "data_offset": 2048, 00:15:16.346 "data_size": 63488 00:15:16.346 } 00:15:16.346 ] 00:15:16.346 }' 00:15:16.346 06:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.346 06:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.911 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:16.911 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:16.911 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.911 
06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.911 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.911 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:16.911 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.911 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:16.912 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:16.912 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:16.912 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.912 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.912 [2024-12-06 06:41:35.452354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:16.912 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.912 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:16.912 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:16.912 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.912 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:16.912 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.912 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.176 [2024-12-06 06:41:35.595185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:17.176 06:41:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.176 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.176 [2024-12-06 06:41:35.732110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:17.176 [2024-12-06 06:41:35.732306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.433 BaseBdev2 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.433 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.433 [ 00:15:17.433 { 00:15:17.433 "name": "BaseBdev2", 00:15:17.433 "aliases": [ 00:15:17.433 
"4d4083fc-2653-4d6f-b032-4e89d443a903" 00:15:17.434 ], 00:15:17.434 "product_name": "Malloc disk", 00:15:17.434 "block_size": 512, 00:15:17.434 "num_blocks": 65536, 00:15:17.434 "uuid": "4d4083fc-2653-4d6f-b032-4e89d443a903", 00:15:17.434 "assigned_rate_limits": { 00:15:17.434 "rw_ios_per_sec": 0, 00:15:17.434 "rw_mbytes_per_sec": 0, 00:15:17.434 "r_mbytes_per_sec": 0, 00:15:17.434 "w_mbytes_per_sec": 0 00:15:17.434 }, 00:15:17.434 "claimed": false, 00:15:17.434 "zoned": false, 00:15:17.434 "supported_io_types": { 00:15:17.434 "read": true, 00:15:17.434 "write": true, 00:15:17.434 "unmap": true, 00:15:17.434 "flush": true, 00:15:17.434 "reset": true, 00:15:17.434 "nvme_admin": false, 00:15:17.434 "nvme_io": false, 00:15:17.434 "nvme_io_md": false, 00:15:17.434 "write_zeroes": true, 00:15:17.434 "zcopy": true, 00:15:17.434 "get_zone_info": false, 00:15:17.434 "zone_management": false, 00:15:17.434 "zone_append": false, 00:15:17.434 "compare": false, 00:15:17.434 "compare_and_write": false, 00:15:17.434 "abort": true, 00:15:17.434 "seek_hole": false, 00:15:17.434 "seek_data": false, 00:15:17.434 "copy": true, 00:15:17.434 "nvme_iov_md": false 00:15:17.434 }, 00:15:17.434 "memory_domains": [ 00:15:17.434 { 00:15:17.434 "dma_device_id": "system", 00:15:17.434 "dma_device_type": 1 00:15:17.434 }, 00:15:17.434 { 00:15:17.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.434 "dma_device_type": 2 00:15:17.434 } 00:15:17.434 ], 00:15:17.434 "driver_specific": {} 00:15:17.434 } 00:15:17.434 ] 00:15:17.434 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.434 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:17.434 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:17.434 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:17.434 06:41:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:17.434 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.434 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.434 BaseBdev3 00:15:17.434 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.434 06:41:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:17.434 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:17.434 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:17.434 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:17.434 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:17.434 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:17.434 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:17.434 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.434 06:41:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.434 [ 00:15:17.434 { 
00:15:17.434 "name": "BaseBdev3", 00:15:17.434 "aliases": [ 00:15:17.434 "3874ccee-7faf-4189-98a6-cbbe772a102b" 00:15:17.434 ], 00:15:17.434 "product_name": "Malloc disk", 00:15:17.434 "block_size": 512, 00:15:17.434 "num_blocks": 65536, 00:15:17.434 "uuid": "3874ccee-7faf-4189-98a6-cbbe772a102b", 00:15:17.434 "assigned_rate_limits": { 00:15:17.434 "rw_ios_per_sec": 0, 00:15:17.434 "rw_mbytes_per_sec": 0, 00:15:17.434 "r_mbytes_per_sec": 0, 00:15:17.434 "w_mbytes_per_sec": 0 00:15:17.434 }, 00:15:17.434 "claimed": false, 00:15:17.434 "zoned": false, 00:15:17.434 "supported_io_types": { 00:15:17.434 "read": true, 00:15:17.434 "write": true, 00:15:17.434 "unmap": true, 00:15:17.434 "flush": true, 00:15:17.434 "reset": true, 00:15:17.434 "nvme_admin": false, 00:15:17.434 "nvme_io": false, 00:15:17.434 "nvme_io_md": false, 00:15:17.434 "write_zeroes": true, 00:15:17.434 "zcopy": true, 00:15:17.434 "get_zone_info": false, 00:15:17.434 "zone_management": false, 00:15:17.434 "zone_append": false, 00:15:17.434 "compare": false, 00:15:17.434 "compare_and_write": false, 00:15:17.434 "abort": true, 00:15:17.434 "seek_hole": false, 00:15:17.434 "seek_data": false, 00:15:17.434 "copy": true, 00:15:17.434 "nvme_iov_md": false 00:15:17.434 }, 00:15:17.434 "memory_domains": [ 00:15:17.434 { 00:15:17.434 "dma_device_id": "system", 00:15:17.434 "dma_device_type": 1 00:15:17.434 }, 00:15:17.434 { 00:15:17.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.434 "dma_device_type": 2 00:15:17.434 } 00:15:17.434 ], 00:15:17.434 "driver_specific": {} 00:15:17.434 } 00:15:17.434 ] 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.434 BaseBdev4 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.434 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:17.693 [ 00:15:17.693 { 00:15:17.693 "name": "BaseBdev4", 00:15:17.693 "aliases": [ 00:15:17.693 "1b36698c-c217-4c77-a236-ed3355932623" 00:15:17.693 ], 00:15:17.693 "product_name": "Malloc disk", 00:15:17.693 "block_size": 512, 00:15:17.693 "num_blocks": 65536, 00:15:17.693 "uuid": "1b36698c-c217-4c77-a236-ed3355932623", 00:15:17.693 "assigned_rate_limits": { 00:15:17.693 "rw_ios_per_sec": 0, 00:15:17.693 "rw_mbytes_per_sec": 0, 00:15:17.693 "r_mbytes_per_sec": 0, 00:15:17.693 "w_mbytes_per_sec": 0 00:15:17.693 }, 00:15:17.693 "claimed": false, 00:15:17.693 "zoned": false, 00:15:17.693 "supported_io_types": { 00:15:17.693 "read": true, 00:15:17.693 "write": true, 00:15:17.693 "unmap": true, 00:15:17.693 "flush": true, 00:15:17.693 "reset": true, 00:15:17.693 "nvme_admin": false, 00:15:17.693 "nvme_io": false, 00:15:17.693 "nvme_io_md": false, 00:15:17.693 "write_zeroes": true, 00:15:17.693 "zcopy": true, 00:15:17.693 "get_zone_info": false, 00:15:17.693 "zone_management": false, 00:15:17.693 "zone_append": false, 00:15:17.693 "compare": false, 00:15:17.693 "compare_and_write": false, 00:15:17.693 "abort": true, 00:15:17.693 "seek_hole": false, 00:15:17.693 "seek_data": false, 00:15:17.693 "copy": true, 00:15:17.693 "nvme_iov_md": false 00:15:17.693 }, 00:15:17.693 "memory_domains": [ 00:15:17.693 { 00:15:17.693 "dma_device_id": "system", 00:15:17.693 "dma_device_type": 1 00:15:17.693 }, 00:15:17.693 { 00:15:17.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.693 "dma_device_type": 2 00:15:17.693 } 00:15:17.693 ], 00:15:17.693 "driver_specific": {} 00:15:17.693 } 00:15:17.693 ] 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:17.693 06:41:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.693 [2024-12-06 06:41:36.107583] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:17.693 [2024-12-06 06:41:36.107642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:17.693 [2024-12-06 06:41:36.107683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:17.693 [2024-12-06 06:41:36.110326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:17.693 [2024-12-06 06:41:36.110546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.693 "name": "Existed_Raid", 00:15:17.693 "uuid": "a5dc1e7d-d61e-47a9-bfb5-70d0267ed811", 00:15:17.693 "strip_size_kb": 64, 00:15:17.693 "state": "configuring", 00:15:17.693 "raid_level": "raid0", 00:15:17.693 "superblock": true, 00:15:17.693 "num_base_bdevs": 4, 00:15:17.693 "num_base_bdevs_discovered": 3, 00:15:17.693 "num_base_bdevs_operational": 4, 00:15:17.693 "base_bdevs_list": [ 00:15:17.693 { 00:15:17.693 "name": "BaseBdev1", 00:15:17.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.693 "is_configured": false, 00:15:17.693 "data_offset": 0, 00:15:17.693 "data_size": 0 00:15:17.693 }, 00:15:17.693 { 00:15:17.693 "name": "BaseBdev2", 00:15:17.693 "uuid": "4d4083fc-2653-4d6f-b032-4e89d443a903", 00:15:17.693 "is_configured": true, 00:15:17.693 "data_offset": 2048, 00:15:17.693 "data_size": 63488 
00:15:17.693 }, 00:15:17.693 { 00:15:17.693 "name": "BaseBdev3", 00:15:17.693 "uuid": "3874ccee-7faf-4189-98a6-cbbe772a102b", 00:15:17.693 "is_configured": true, 00:15:17.693 "data_offset": 2048, 00:15:17.693 "data_size": 63488 00:15:17.693 }, 00:15:17.693 { 00:15:17.693 "name": "BaseBdev4", 00:15:17.693 "uuid": "1b36698c-c217-4c77-a236-ed3355932623", 00:15:17.693 "is_configured": true, 00:15:17.693 "data_offset": 2048, 00:15:17.693 "data_size": 63488 00:15:17.693 } 00:15:17.693 ] 00:15:17.693 }' 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.693 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.259 [2024-12-06 06:41:36.631699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.259 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.260 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.260 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.260 "name": "Existed_Raid", 00:15:18.260 "uuid": "a5dc1e7d-d61e-47a9-bfb5-70d0267ed811", 00:15:18.260 "strip_size_kb": 64, 00:15:18.260 "state": "configuring", 00:15:18.260 "raid_level": "raid0", 00:15:18.260 "superblock": true, 00:15:18.260 "num_base_bdevs": 4, 00:15:18.260 "num_base_bdevs_discovered": 2, 00:15:18.260 "num_base_bdevs_operational": 4, 00:15:18.260 "base_bdevs_list": [ 00:15:18.260 { 00:15:18.260 "name": "BaseBdev1", 00:15:18.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.260 "is_configured": false, 00:15:18.260 "data_offset": 0, 00:15:18.260 "data_size": 0 00:15:18.260 }, 00:15:18.260 { 00:15:18.260 "name": null, 00:15:18.260 "uuid": "4d4083fc-2653-4d6f-b032-4e89d443a903", 00:15:18.260 "is_configured": false, 00:15:18.260 "data_offset": 0, 00:15:18.260 "data_size": 63488 
00:15:18.260 }, 00:15:18.260 { 00:15:18.260 "name": "BaseBdev3", 00:15:18.260 "uuid": "3874ccee-7faf-4189-98a6-cbbe772a102b", 00:15:18.260 "is_configured": true, 00:15:18.260 "data_offset": 2048, 00:15:18.260 "data_size": 63488 00:15:18.260 }, 00:15:18.260 { 00:15:18.260 "name": "BaseBdev4", 00:15:18.260 "uuid": "1b36698c-c217-4c77-a236-ed3355932623", 00:15:18.260 "is_configured": true, 00:15:18.260 "data_offset": 2048, 00:15:18.260 "data_size": 63488 00:15:18.260 } 00:15:18.260 ] 00:15:18.260 }' 00:15:18.260 06:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.260 06:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.519 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:18.519 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.519 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.519 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.519 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.780 [2024-12-06 06:41:37.230949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.780 BaseBdev1 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.780 [ 00:15:18.780 { 00:15:18.780 "name": "BaseBdev1", 00:15:18.780 "aliases": [ 00:15:18.780 "53fb9028-7c4e-4289-9344-bd3974a98019" 00:15:18.780 ], 00:15:18.780 "product_name": "Malloc disk", 00:15:18.780 "block_size": 512, 00:15:18.780 "num_blocks": 65536, 00:15:18.780 "uuid": "53fb9028-7c4e-4289-9344-bd3974a98019", 00:15:18.780 "assigned_rate_limits": { 00:15:18.780 "rw_ios_per_sec": 0, 00:15:18.780 "rw_mbytes_per_sec": 0, 
00:15:18.780 "r_mbytes_per_sec": 0, 00:15:18.780 "w_mbytes_per_sec": 0 00:15:18.780 }, 00:15:18.780 "claimed": true, 00:15:18.780 "claim_type": "exclusive_write", 00:15:18.780 "zoned": false, 00:15:18.780 "supported_io_types": { 00:15:18.780 "read": true, 00:15:18.780 "write": true, 00:15:18.780 "unmap": true, 00:15:18.780 "flush": true, 00:15:18.780 "reset": true, 00:15:18.780 "nvme_admin": false, 00:15:18.780 "nvme_io": false, 00:15:18.780 "nvme_io_md": false, 00:15:18.780 "write_zeroes": true, 00:15:18.780 "zcopy": true, 00:15:18.780 "get_zone_info": false, 00:15:18.780 "zone_management": false, 00:15:18.780 "zone_append": false, 00:15:18.780 "compare": false, 00:15:18.780 "compare_and_write": false, 00:15:18.780 "abort": true, 00:15:18.780 "seek_hole": false, 00:15:18.780 "seek_data": false, 00:15:18.780 "copy": true, 00:15:18.780 "nvme_iov_md": false 00:15:18.780 }, 00:15:18.780 "memory_domains": [ 00:15:18.780 { 00:15:18.780 "dma_device_id": "system", 00:15:18.780 "dma_device_type": 1 00:15:18.780 }, 00:15:18.780 { 00:15:18.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.780 "dma_device_type": 2 00:15:18.780 } 00:15:18.780 ], 00:15:18.780 "driver_specific": {} 00:15:18.780 } 00:15:18.780 ] 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:18.780 06:41:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.780 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.780 "name": "Existed_Raid", 00:15:18.780 "uuid": "a5dc1e7d-d61e-47a9-bfb5-70d0267ed811", 00:15:18.780 "strip_size_kb": 64, 00:15:18.780 "state": "configuring", 00:15:18.780 "raid_level": "raid0", 00:15:18.780 "superblock": true, 00:15:18.780 "num_base_bdevs": 4, 00:15:18.780 "num_base_bdevs_discovered": 3, 00:15:18.780 "num_base_bdevs_operational": 4, 00:15:18.780 "base_bdevs_list": [ 00:15:18.780 { 00:15:18.780 "name": "BaseBdev1", 00:15:18.780 "uuid": "53fb9028-7c4e-4289-9344-bd3974a98019", 00:15:18.780 "is_configured": true, 00:15:18.780 "data_offset": 2048, 00:15:18.780 "data_size": 63488 00:15:18.780 }, 00:15:18.780 { 
00:15:18.781 "name": null, 00:15:18.781 "uuid": "4d4083fc-2653-4d6f-b032-4e89d443a903", 00:15:18.781 "is_configured": false, 00:15:18.781 "data_offset": 0, 00:15:18.781 "data_size": 63488 00:15:18.781 }, 00:15:18.781 { 00:15:18.781 "name": "BaseBdev3", 00:15:18.781 "uuid": "3874ccee-7faf-4189-98a6-cbbe772a102b", 00:15:18.781 "is_configured": true, 00:15:18.781 "data_offset": 2048, 00:15:18.781 "data_size": 63488 00:15:18.781 }, 00:15:18.781 { 00:15:18.781 "name": "BaseBdev4", 00:15:18.781 "uuid": "1b36698c-c217-4c77-a236-ed3355932623", 00:15:18.781 "is_configured": true, 00:15:18.781 "data_offset": 2048, 00:15:18.781 "data_size": 63488 00:15:18.781 } 00:15:18.781 ] 00:15:18.781 }' 00:15:18.781 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.781 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.349 [2024-12-06 06:41:37.803277] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.349 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.349 06:41:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.349 "name": "Existed_Raid", 00:15:19.349 "uuid": "a5dc1e7d-d61e-47a9-bfb5-70d0267ed811", 00:15:19.349 "strip_size_kb": 64, 00:15:19.349 "state": "configuring", 00:15:19.349 "raid_level": "raid0", 00:15:19.349 "superblock": true, 00:15:19.349 "num_base_bdevs": 4, 00:15:19.349 "num_base_bdevs_discovered": 2, 00:15:19.349 "num_base_bdevs_operational": 4, 00:15:19.349 "base_bdevs_list": [ 00:15:19.349 { 00:15:19.349 "name": "BaseBdev1", 00:15:19.349 "uuid": "53fb9028-7c4e-4289-9344-bd3974a98019", 00:15:19.350 "is_configured": true, 00:15:19.350 "data_offset": 2048, 00:15:19.350 "data_size": 63488 00:15:19.350 }, 00:15:19.350 { 00:15:19.350 "name": null, 00:15:19.350 "uuid": "4d4083fc-2653-4d6f-b032-4e89d443a903", 00:15:19.350 "is_configured": false, 00:15:19.350 "data_offset": 0, 00:15:19.350 "data_size": 63488 00:15:19.350 }, 00:15:19.350 { 00:15:19.350 "name": null, 00:15:19.350 "uuid": "3874ccee-7faf-4189-98a6-cbbe772a102b", 00:15:19.350 "is_configured": false, 00:15:19.350 "data_offset": 0, 00:15:19.350 "data_size": 63488 00:15:19.350 }, 00:15:19.350 { 00:15:19.350 "name": "BaseBdev4", 00:15:19.350 "uuid": "1b36698c-c217-4c77-a236-ed3355932623", 00:15:19.350 "is_configured": true, 00:15:19.350 "data_offset": 2048, 00:15:19.350 "data_size": 63488 00:15:19.350 } 00:15:19.350 ] 00:15:19.350 }' 00:15:19.350 06:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.350 06:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.917 
06:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.917 [2024-12-06 06:41:38.371410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.917 "name": "Existed_Raid", 00:15:19.917 "uuid": "a5dc1e7d-d61e-47a9-bfb5-70d0267ed811", 00:15:19.917 "strip_size_kb": 64, 00:15:19.917 "state": "configuring", 00:15:19.917 "raid_level": "raid0", 00:15:19.917 "superblock": true, 00:15:19.917 "num_base_bdevs": 4, 00:15:19.917 "num_base_bdevs_discovered": 3, 00:15:19.917 "num_base_bdevs_operational": 4, 00:15:19.917 "base_bdevs_list": [ 00:15:19.917 { 00:15:19.917 "name": "BaseBdev1", 00:15:19.917 "uuid": "53fb9028-7c4e-4289-9344-bd3974a98019", 00:15:19.917 "is_configured": true, 00:15:19.917 "data_offset": 2048, 00:15:19.917 "data_size": 63488 00:15:19.917 }, 00:15:19.917 { 00:15:19.917 "name": null, 00:15:19.917 "uuid": "4d4083fc-2653-4d6f-b032-4e89d443a903", 00:15:19.917 "is_configured": false, 00:15:19.917 "data_offset": 0, 00:15:19.917 "data_size": 63488 00:15:19.917 }, 00:15:19.917 { 00:15:19.917 "name": "BaseBdev3", 00:15:19.917 "uuid": "3874ccee-7faf-4189-98a6-cbbe772a102b", 00:15:19.917 "is_configured": true, 00:15:19.917 "data_offset": 2048, 00:15:19.917 "data_size": 63488 00:15:19.917 }, 00:15:19.917 { 00:15:19.917 "name": "BaseBdev4", 00:15:19.917 "uuid": 
"1b36698c-c217-4c77-a236-ed3355932623", 00:15:19.917 "is_configured": true, 00:15:19.917 "data_offset": 2048, 00:15:19.917 "data_size": 63488 00:15:19.917 } 00:15:19.917 ] 00:15:19.917 }' 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.917 06:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.485 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.485 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:20.485 06:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.485 06:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.485 06:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.485 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:20.485 06:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:20.485 06:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.485 06:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.485 [2024-12-06 06:41:38.963643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:20.485 06:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.485 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:20.485 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.485 06:41:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.485 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:20.485 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.485 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.485 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.485 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.485 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.485 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.485 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.485 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.485 06:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.485 06:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.485 06:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.486 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.486 "name": "Existed_Raid", 00:15:20.486 "uuid": "a5dc1e7d-d61e-47a9-bfb5-70d0267ed811", 00:15:20.486 "strip_size_kb": 64, 00:15:20.486 "state": "configuring", 00:15:20.486 "raid_level": "raid0", 00:15:20.486 "superblock": true, 00:15:20.486 "num_base_bdevs": 4, 00:15:20.486 "num_base_bdevs_discovered": 2, 00:15:20.486 "num_base_bdevs_operational": 4, 00:15:20.486 "base_bdevs_list": [ 00:15:20.486 { 00:15:20.486 "name": null, 00:15:20.486 
"uuid": "53fb9028-7c4e-4289-9344-bd3974a98019", 00:15:20.486 "is_configured": false, 00:15:20.486 "data_offset": 0, 00:15:20.486 "data_size": 63488 00:15:20.486 }, 00:15:20.486 { 00:15:20.486 "name": null, 00:15:20.486 "uuid": "4d4083fc-2653-4d6f-b032-4e89d443a903", 00:15:20.486 "is_configured": false, 00:15:20.486 "data_offset": 0, 00:15:20.486 "data_size": 63488 00:15:20.486 }, 00:15:20.486 { 00:15:20.486 "name": "BaseBdev3", 00:15:20.486 "uuid": "3874ccee-7faf-4189-98a6-cbbe772a102b", 00:15:20.486 "is_configured": true, 00:15:20.486 "data_offset": 2048, 00:15:20.486 "data_size": 63488 00:15:20.486 }, 00:15:20.486 { 00:15:20.486 "name": "BaseBdev4", 00:15:20.486 "uuid": "1b36698c-c217-4c77-a236-ed3355932623", 00:15:20.486 "is_configured": true, 00:15:20.486 "data_offset": 2048, 00:15:20.486 "data_size": 63488 00:15:20.486 } 00:15:20.486 ] 00:15:20.486 }' 00:15:20.486 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.486 06:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.058 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.059 [2024-12-06 06:41:39.638887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.059 06:41:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.059 "name": "Existed_Raid", 00:15:21.059 "uuid": "a5dc1e7d-d61e-47a9-bfb5-70d0267ed811", 00:15:21.059 "strip_size_kb": 64, 00:15:21.059 "state": "configuring", 00:15:21.059 "raid_level": "raid0", 00:15:21.059 "superblock": true, 00:15:21.059 "num_base_bdevs": 4, 00:15:21.059 "num_base_bdevs_discovered": 3, 00:15:21.059 "num_base_bdevs_operational": 4, 00:15:21.059 "base_bdevs_list": [ 00:15:21.059 { 00:15:21.059 "name": null, 00:15:21.059 "uuid": "53fb9028-7c4e-4289-9344-bd3974a98019", 00:15:21.059 "is_configured": false, 00:15:21.059 "data_offset": 0, 00:15:21.059 "data_size": 63488 00:15:21.059 }, 00:15:21.059 { 00:15:21.059 "name": "BaseBdev2", 00:15:21.059 "uuid": "4d4083fc-2653-4d6f-b032-4e89d443a903", 00:15:21.059 "is_configured": true, 00:15:21.059 "data_offset": 2048, 00:15:21.059 "data_size": 63488 00:15:21.059 }, 00:15:21.059 { 00:15:21.059 "name": "BaseBdev3", 00:15:21.059 "uuid": "3874ccee-7faf-4189-98a6-cbbe772a102b", 00:15:21.059 "is_configured": true, 00:15:21.059 "data_offset": 2048, 00:15:21.059 "data_size": 63488 00:15:21.059 }, 00:15:21.059 { 00:15:21.059 "name": "BaseBdev4", 00:15:21.059 "uuid": "1b36698c-c217-4c77-a236-ed3355932623", 00:15:21.059 "is_configured": true, 00:15:21.059 "data_offset": 2048, 00:15:21.059 "data_size": 63488 00:15:21.059 } 00:15:21.059 ] 00:15:21.059 }' 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.059 06:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.664 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.664 06:41:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.664 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.664 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:21.664 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.664 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:21.664 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:21.664 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.664 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.664 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.664 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.664 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 53fb9028-7c4e-4289-9344-bd3974a98019 00:15:21.664 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.664 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.928 [2024-12-06 06:41:40.314112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:21.928 [2024-12-06 06:41:40.314625] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:21.928 [2024-12-06 06:41:40.314650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:21.928 NewBaseBdev 00:15:21.928 [2024-12-06 06:41:40.314984] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:21.928 [2024-12-06 06:41:40.315157] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:21.928 [2024-12-06 06:41:40.315178] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:21.928 [2024-12-06 06:41:40.315335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.928 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.928 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:21.928 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:21.928 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:21.928 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:21.928 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:21.928 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:21.928 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:21.928 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.928 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.928 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.928 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:21.928 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.928 
06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.928 [ 00:15:21.928 { 00:15:21.928 "name": "NewBaseBdev", 00:15:21.928 "aliases": [ 00:15:21.928 "53fb9028-7c4e-4289-9344-bd3974a98019" 00:15:21.928 ], 00:15:21.928 "product_name": "Malloc disk", 00:15:21.928 "block_size": 512, 00:15:21.928 "num_blocks": 65536, 00:15:21.928 "uuid": "53fb9028-7c4e-4289-9344-bd3974a98019", 00:15:21.928 "assigned_rate_limits": { 00:15:21.928 "rw_ios_per_sec": 0, 00:15:21.928 "rw_mbytes_per_sec": 0, 00:15:21.928 "r_mbytes_per_sec": 0, 00:15:21.928 "w_mbytes_per_sec": 0 00:15:21.928 }, 00:15:21.928 "claimed": true, 00:15:21.928 "claim_type": "exclusive_write", 00:15:21.928 "zoned": false, 00:15:21.928 "supported_io_types": { 00:15:21.928 "read": true, 00:15:21.928 "write": true, 00:15:21.928 "unmap": true, 00:15:21.928 "flush": true, 00:15:21.928 "reset": true, 00:15:21.928 "nvme_admin": false, 00:15:21.928 "nvme_io": false, 00:15:21.928 "nvme_io_md": false, 00:15:21.928 "write_zeroes": true, 00:15:21.928 "zcopy": true, 00:15:21.929 "get_zone_info": false, 00:15:21.929 "zone_management": false, 00:15:21.929 "zone_append": false, 00:15:21.929 "compare": false, 00:15:21.929 "compare_and_write": false, 00:15:21.929 "abort": true, 00:15:21.929 "seek_hole": false, 00:15:21.929 "seek_data": false, 00:15:21.929 "copy": true, 00:15:21.929 "nvme_iov_md": false 00:15:21.929 }, 00:15:21.929 "memory_domains": [ 00:15:21.929 { 00:15:21.929 "dma_device_id": "system", 00:15:21.929 "dma_device_type": 1 00:15:21.929 }, 00:15:21.929 { 00:15:21.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.929 "dma_device_type": 2 00:15:21.929 } 00:15:21.929 ], 00:15:21.929 "driver_specific": {} 00:15:21.929 } 00:15:21.929 ] 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:21.929 06:41:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.929 "name": "Existed_Raid", 00:15:21.929 "uuid": "a5dc1e7d-d61e-47a9-bfb5-70d0267ed811", 00:15:21.929 "strip_size_kb": 64, 00:15:21.929 
"state": "online", 00:15:21.929 "raid_level": "raid0", 00:15:21.929 "superblock": true, 00:15:21.929 "num_base_bdevs": 4, 00:15:21.929 "num_base_bdevs_discovered": 4, 00:15:21.929 "num_base_bdevs_operational": 4, 00:15:21.929 "base_bdevs_list": [ 00:15:21.929 { 00:15:21.929 "name": "NewBaseBdev", 00:15:21.929 "uuid": "53fb9028-7c4e-4289-9344-bd3974a98019", 00:15:21.929 "is_configured": true, 00:15:21.929 "data_offset": 2048, 00:15:21.929 "data_size": 63488 00:15:21.929 }, 00:15:21.929 { 00:15:21.929 "name": "BaseBdev2", 00:15:21.929 "uuid": "4d4083fc-2653-4d6f-b032-4e89d443a903", 00:15:21.929 "is_configured": true, 00:15:21.929 "data_offset": 2048, 00:15:21.929 "data_size": 63488 00:15:21.929 }, 00:15:21.929 { 00:15:21.929 "name": "BaseBdev3", 00:15:21.929 "uuid": "3874ccee-7faf-4189-98a6-cbbe772a102b", 00:15:21.929 "is_configured": true, 00:15:21.929 "data_offset": 2048, 00:15:21.929 "data_size": 63488 00:15:21.929 }, 00:15:21.929 { 00:15:21.929 "name": "BaseBdev4", 00:15:21.929 "uuid": "1b36698c-c217-4c77-a236-ed3355932623", 00:15:21.929 "is_configured": true, 00:15:21.929 "data_offset": 2048, 00:15:21.929 "data_size": 63488 00:15:21.929 } 00:15:21.929 ] 00:15:21.929 }' 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.929 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.497 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:22.497 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:22.497 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:22.497 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:22.497 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:22.497 
06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:22.497 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:22.497 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:22.497 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.497 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.497 [2024-12-06 06:41:40.886834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.497 06:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.497 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:22.497 "name": "Existed_Raid", 00:15:22.497 "aliases": [ 00:15:22.497 "a5dc1e7d-d61e-47a9-bfb5-70d0267ed811" 00:15:22.497 ], 00:15:22.497 "product_name": "Raid Volume", 00:15:22.497 "block_size": 512, 00:15:22.497 "num_blocks": 253952, 00:15:22.497 "uuid": "a5dc1e7d-d61e-47a9-bfb5-70d0267ed811", 00:15:22.497 "assigned_rate_limits": { 00:15:22.497 "rw_ios_per_sec": 0, 00:15:22.497 "rw_mbytes_per_sec": 0, 00:15:22.497 "r_mbytes_per_sec": 0, 00:15:22.497 "w_mbytes_per_sec": 0 00:15:22.497 }, 00:15:22.497 "claimed": false, 00:15:22.497 "zoned": false, 00:15:22.497 "supported_io_types": { 00:15:22.497 "read": true, 00:15:22.497 "write": true, 00:15:22.497 "unmap": true, 00:15:22.497 "flush": true, 00:15:22.497 "reset": true, 00:15:22.497 "nvme_admin": false, 00:15:22.497 "nvme_io": false, 00:15:22.497 "nvme_io_md": false, 00:15:22.497 "write_zeroes": true, 00:15:22.497 "zcopy": false, 00:15:22.497 "get_zone_info": false, 00:15:22.497 "zone_management": false, 00:15:22.497 "zone_append": false, 00:15:22.497 "compare": false, 00:15:22.497 "compare_and_write": false, 00:15:22.497 "abort": 
false, 00:15:22.497 "seek_hole": false, 00:15:22.497 "seek_data": false, 00:15:22.497 "copy": false, 00:15:22.497 "nvme_iov_md": false 00:15:22.497 }, 00:15:22.497 "memory_domains": [ 00:15:22.497 { 00:15:22.497 "dma_device_id": "system", 00:15:22.497 "dma_device_type": 1 00:15:22.497 }, 00:15:22.497 { 00:15:22.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.497 "dma_device_type": 2 00:15:22.497 }, 00:15:22.497 { 00:15:22.497 "dma_device_id": "system", 00:15:22.497 "dma_device_type": 1 00:15:22.497 }, 00:15:22.497 { 00:15:22.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.497 "dma_device_type": 2 00:15:22.497 }, 00:15:22.497 { 00:15:22.497 "dma_device_id": "system", 00:15:22.497 "dma_device_type": 1 00:15:22.497 }, 00:15:22.497 { 00:15:22.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.497 "dma_device_type": 2 00:15:22.497 }, 00:15:22.497 { 00:15:22.497 "dma_device_id": "system", 00:15:22.497 "dma_device_type": 1 00:15:22.497 }, 00:15:22.497 { 00:15:22.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.497 "dma_device_type": 2 00:15:22.497 } 00:15:22.497 ], 00:15:22.497 "driver_specific": { 00:15:22.497 "raid": { 00:15:22.497 "uuid": "a5dc1e7d-d61e-47a9-bfb5-70d0267ed811", 00:15:22.497 "strip_size_kb": 64, 00:15:22.497 "state": "online", 00:15:22.497 "raid_level": "raid0", 00:15:22.497 "superblock": true, 00:15:22.497 "num_base_bdevs": 4, 00:15:22.497 "num_base_bdevs_discovered": 4, 00:15:22.497 "num_base_bdevs_operational": 4, 00:15:22.497 "base_bdevs_list": [ 00:15:22.497 { 00:15:22.497 "name": "NewBaseBdev", 00:15:22.497 "uuid": "53fb9028-7c4e-4289-9344-bd3974a98019", 00:15:22.497 "is_configured": true, 00:15:22.497 "data_offset": 2048, 00:15:22.497 "data_size": 63488 00:15:22.497 }, 00:15:22.497 { 00:15:22.497 "name": "BaseBdev2", 00:15:22.497 "uuid": "4d4083fc-2653-4d6f-b032-4e89d443a903", 00:15:22.497 "is_configured": true, 00:15:22.497 "data_offset": 2048, 00:15:22.497 "data_size": 63488 00:15:22.497 }, 00:15:22.497 { 00:15:22.497 
"name": "BaseBdev3", 00:15:22.497 "uuid": "3874ccee-7faf-4189-98a6-cbbe772a102b", 00:15:22.497 "is_configured": true, 00:15:22.497 "data_offset": 2048, 00:15:22.497 "data_size": 63488 00:15:22.497 }, 00:15:22.497 { 00:15:22.497 "name": "BaseBdev4", 00:15:22.497 "uuid": "1b36698c-c217-4c77-a236-ed3355932623", 00:15:22.497 "is_configured": true, 00:15:22.497 "data_offset": 2048, 00:15:22.497 "data_size": 63488 00:15:22.497 } 00:15:22.497 ] 00:15:22.497 } 00:15:22.497 } 00:15:22.497 }' 00:15:22.497 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:22.497 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:22.497 BaseBdev2 00:15:22.497 BaseBdev3 00:15:22.497 BaseBdev4' 00:15:22.497 06:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.497 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:22.497 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.497 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:22.497 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.497 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.497 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.497 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.497 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.497 06:41:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.497 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.497 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.497 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:22.497 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.497 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.497 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.756 [2024-12-06 06:41:41.286428] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.756 [2024-12-06 06:41:41.286632] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.756 [2024-12-06 06:41:41.286756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.756 [2024-12-06 06:41:41.286854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.756 [2024-12-06 06:41:41.286872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70338 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70338 ']' 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70338 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70338 00:15:22.756 killing process with pid 70338 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70338' 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70338 00:15:22.756 [2024-12-06 06:41:41.325990] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.756 06:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70338 00:15:23.322 [2024-12-06 06:41:41.683876] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:24.258 06:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:24.258 00:15:24.258 real 0m12.753s 00:15:24.258 user 0m21.067s 00:15:24.258 sys 0m1.785s 00:15:24.258 06:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.258 06:41:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.258 ************************************ 00:15:24.258 END TEST raid_state_function_test_sb 00:15:24.258 ************************************ 00:15:24.258 06:41:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:15:24.258 06:41:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:24.258 06:41:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.258 06:41:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:24.258 ************************************ 00:15:24.258 START TEST raid_superblock_test 00:15:24.258 ************************************ 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71020 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71020 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71020 ']' 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.258 06:41:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.258 [2024-12-06 06:41:42.890036] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:15:24.258 [2024-12-06 06:41:42.890397] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71020 ] 00:15:24.517 [2024-12-06 06:41:43.072826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.777 [2024-12-06 06:41:43.227741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.057 [2024-12-06 06:41:43.445606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.057 [2024-12-06 06:41:43.445683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:25.317 
06:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.317 malloc1 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.317 [2024-12-06 06:41:43.912377] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:25.317 [2024-12-06 06:41:43.912448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.317 [2024-12-06 06:41:43.912480] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:25.317 [2024-12-06 06:41:43.912496] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.317 [2024-12-06 06:41:43.915443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.317 [2024-12-06 06:41:43.915489] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:25.317 pt1 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.317 malloc2 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.317 06:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.317 [2024-12-06 06:41:43.960405] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:25.576 [2024-12-06 06:41:43.960637] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.576 [2024-12-06 06:41:43.960697] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:25.576 [2024-12-06 06:41:43.960717] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.576 [2024-12-06 06:41:43.963445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.576 [2024-12-06 06:41:43.963485] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:25.576 
pt2 00:15:25.576 06:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.576 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:25.576 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:25.576 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:25.576 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:25.576 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:25.576 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:25.576 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:25.576 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:25.576 06:41:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:25.576 06:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.576 06:41:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.576 malloc3 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.576 [2024-12-06 06:41:44.019785] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:25.576 [2024-12-06 06:41:44.020010] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.576 [2024-12-06 06:41:44.020068] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:25.576 [2024-12-06 06:41:44.020090] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.576 [2024-12-06 06:41:44.022946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.576 [2024-12-06 06:41:44.022992] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:25.576 pt3 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.576 malloc4 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.576 [2024-12-06 06:41:44.072086] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:25.576 [2024-12-06 06:41:44.072160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.576 [2024-12-06 06:41:44.072192] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:25.576 [2024-12-06 06:41:44.072207] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.576 [2024-12-06 06:41:44.075123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.576 [2024-12-06 06:41:44.075169] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:25.576 pt4 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:25.576 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.577 [2024-12-06 06:41:44.084128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:25.577 [2024-12-06 
06:41:44.086615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:25.577 [2024-12-06 06:41:44.086735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:25.577 [2024-12-06 06:41:44.086807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:25.577 [2024-12-06 06:41:44.087067] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:25.577 [2024-12-06 06:41:44.087086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:25.577 [2024-12-06 06:41:44.087425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:25.577 [2024-12-06 06:41:44.087660] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:25.577 [2024-12-06 06:41:44.087706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:25.577 [2024-12-06 06:41:44.087943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.577 "name": "raid_bdev1", 00:15:25.577 "uuid": "dddb1718-ceea-4b02-85ed-c6ba54e78474", 00:15:25.577 "strip_size_kb": 64, 00:15:25.577 "state": "online", 00:15:25.577 "raid_level": "raid0", 00:15:25.577 "superblock": true, 00:15:25.577 "num_base_bdevs": 4, 00:15:25.577 "num_base_bdevs_discovered": 4, 00:15:25.577 "num_base_bdevs_operational": 4, 00:15:25.577 "base_bdevs_list": [ 00:15:25.577 { 00:15:25.577 "name": "pt1", 00:15:25.577 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:25.577 "is_configured": true, 00:15:25.577 "data_offset": 2048, 00:15:25.577 "data_size": 63488 00:15:25.577 }, 00:15:25.577 { 00:15:25.577 "name": "pt2", 00:15:25.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:25.577 "is_configured": true, 00:15:25.577 "data_offset": 2048, 00:15:25.577 "data_size": 63488 00:15:25.577 }, 00:15:25.577 { 00:15:25.577 "name": "pt3", 00:15:25.577 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:25.577 "is_configured": true, 00:15:25.577 "data_offset": 2048, 00:15:25.577 
"data_size": 63488 00:15:25.577 }, 00:15:25.577 { 00:15:25.577 "name": "pt4", 00:15:25.577 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:25.577 "is_configured": true, 00:15:25.577 "data_offset": 2048, 00:15:25.577 "data_size": 63488 00:15:25.577 } 00:15:25.577 ] 00:15:25.577 }' 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.577 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.144 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:26.144 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:26.144 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:26.144 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:26.144 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:26.144 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:26.144 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:26.144 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:26.144 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.144 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.144 [2024-12-06 06:41:44.592707] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.144 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.144 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:26.144 "name": "raid_bdev1", 00:15:26.144 "aliases": [ 00:15:26.144 "dddb1718-ceea-4b02-85ed-c6ba54e78474" 
00:15:26.144 ], 00:15:26.144 "product_name": "Raid Volume", 00:15:26.144 "block_size": 512, 00:15:26.144 "num_blocks": 253952, 00:15:26.144 "uuid": "dddb1718-ceea-4b02-85ed-c6ba54e78474", 00:15:26.144 "assigned_rate_limits": { 00:15:26.144 "rw_ios_per_sec": 0, 00:15:26.144 "rw_mbytes_per_sec": 0, 00:15:26.144 "r_mbytes_per_sec": 0, 00:15:26.144 "w_mbytes_per_sec": 0 00:15:26.144 }, 00:15:26.144 "claimed": false, 00:15:26.144 "zoned": false, 00:15:26.144 "supported_io_types": { 00:15:26.144 "read": true, 00:15:26.144 "write": true, 00:15:26.144 "unmap": true, 00:15:26.144 "flush": true, 00:15:26.144 "reset": true, 00:15:26.144 "nvme_admin": false, 00:15:26.144 "nvme_io": false, 00:15:26.144 "nvme_io_md": false, 00:15:26.144 "write_zeroes": true, 00:15:26.144 "zcopy": false, 00:15:26.144 "get_zone_info": false, 00:15:26.144 "zone_management": false, 00:15:26.144 "zone_append": false, 00:15:26.144 "compare": false, 00:15:26.144 "compare_and_write": false, 00:15:26.144 "abort": false, 00:15:26.144 "seek_hole": false, 00:15:26.144 "seek_data": false, 00:15:26.144 "copy": false, 00:15:26.144 "nvme_iov_md": false 00:15:26.144 }, 00:15:26.144 "memory_domains": [ 00:15:26.144 { 00:15:26.144 "dma_device_id": "system", 00:15:26.144 "dma_device_type": 1 00:15:26.144 }, 00:15:26.144 { 00:15:26.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.144 "dma_device_type": 2 00:15:26.144 }, 00:15:26.144 { 00:15:26.144 "dma_device_id": "system", 00:15:26.144 "dma_device_type": 1 00:15:26.144 }, 00:15:26.144 { 00:15:26.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.144 "dma_device_type": 2 00:15:26.144 }, 00:15:26.144 { 00:15:26.144 "dma_device_id": "system", 00:15:26.145 "dma_device_type": 1 00:15:26.145 }, 00:15:26.145 { 00:15:26.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.145 "dma_device_type": 2 00:15:26.145 }, 00:15:26.145 { 00:15:26.145 "dma_device_id": "system", 00:15:26.145 "dma_device_type": 1 00:15:26.145 }, 00:15:26.145 { 00:15:26.145 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:26.145 "dma_device_type": 2 00:15:26.145 } 00:15:26.145 ], 00:15:26.145 "driver_specific": { 00:15:26.145 "raid": { 00:15:26.145 "uuid": "dddb1718-ceea-4b02-85ed-c6ba54e78474", 00:15:26.145 "strip_size_kb": 64, 00:15:26.145 "state": "online", 00:15:26.145 "raid_level": "raid0", 00:15:26.145 "superblock": true, 00:15:26.145 "num_base_bdevs": 4, 00:15:26.145 "num_base_bdevs_discovered": 4, 00:15:26.145 "num_base_bdevs_operational": 4, 00:15:26.145 "base_bdevs_list": [ 00:15:26.145 { 00:15:26.145 "name": "pt1", 00:15:26.145 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:26.145 "is_configured": true, 00:15:26.145 "data_offset": 2048, 00:15:26.145 "data_size": 63488 00:15:26.145 }, 00:15:26.145 { 00:15:26.145 "name": "pt2", 00:15:26.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:26.145 "is_configured": true, 00:15:26.145 "data_offset": 2048, 00:15:26.145 "data_size": 63488 00:15:26.145 }, 00:15:26.145 { 00:15:26.145 "name": "pt3", 00:15:26.145 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:26.145 "is_configured": true, 00:15:26.145 "data_offset": 2048, 00:15:26.145 "data_size": 63488 00:15:26.145 }, 00:15:26.145 { 00:15:26.145 "name": "pt4", 00:15:26.145 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:26.145 "is_configured": true, 00:15:26.145 "data_offset": 2048, 00:15:26.145 "data_size": 63488 00:15:26.145 } 00:15:26.145 ] 00:15:26.145 } 00:15:26.145 } 00:15:26.145 }' 00:15:26.145 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:26.145 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:26.145 pt2 00:15:26.145 pt3 00:15:26.145 pt4' 00:15:26.145 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.145 06:41:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:26.145 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.145 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:26.145 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.145 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.145 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.145 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.145 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.145 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.145 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.145 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:26.145 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.145 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.145 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.403 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.403 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.403 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.403 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.403 06:41:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:26.403 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.403 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.403 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.403 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.403 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.403 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.403 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.404 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:26.404 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.404 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.404 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.404 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.404 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.404 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.404 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:26.404 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:26.404 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:26.404 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.404 [2024-12-06 06:41:44.952783] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.404 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.404 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dddb1718-ceea-4b02-85ed-c6ba54e78474 00:15:26.404 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z dddb1718-ceea-4b02-85ed-c6ba54e78474 ']' 00:15:26.404 06:41:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:26.404 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.404 06:41:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.404 [2024-12-06 06:41:44.996398] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:26.404 [2024-12-06 06:41:44.996585] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.404 [2024-12-06 06:41:44.996834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.404 [2024-12-06 06:41:44.997043] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.404 [2024-12-06 06:41:44.997227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:26.404 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.404 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.404 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:26.404 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:26.404 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.404 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.662 06:41:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.662 [2024-12-06 06:41:45.144442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:26.662 [2024-12-06 06:41:45.146952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:26.662 [2024-12-06 06:41:45.147022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:26.662 [2024-12-06 06:41:45.147076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:26.662 [2024-12-06 06:41:45.147151] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:26.662 [2024-12-06 06:41:45.147224] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:26.662 [2024-12-06 06:41:45.147257] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:26.662 [2024-12-06 06:41:45.147287] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:26.662 [2024-12-06 06:41:45.147309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:26.662 [2024-12-06 06:41:45.147328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:15:26.662 request: 00:15:26.662 { 00:15:26.662 "name": "raid_bdev1", 00:15:26.662 "raid_level": "raid0", 00:15:26.662 "base_bdevs": [ 00:15:26.662 "malloc1", 00:15:26.662 "malloc2", 00:15:26.662 "malloc3", 00:15:26.662 "malloc4" 00:15:26.662 ], 00:15:26.662 "strip_size_kb": 64, 00:15:26.662 "superblock": false, 00:15:26.662 "method": "bdev_raid_create", 00:15:26.662 "req_id": 1 00:15:26.662 } 00:15:26.662 Got JSON-RPC error response 00:15:26.662 response: 00:15:26.662 { 00:15:26.662 "code": -17, 00:15:26.662 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:26.662 } 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.662 [2024-12-06 06:41:45.212403] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:26.662 [2024-12-06 06:41:45.212662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.662 [2024-12-06 06:41:45.212705] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:26.662 [2024-12-06 06:41:45.212724] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.662 [2024-12-06 06:41:45.215810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.662 [2024-12-06 06:41:45.215861] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:26.662 [2024-12-06 06:41:45.216008] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:26.662 [2024-12-06 06:41:45.216076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:26.662 pt1 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.662 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:15:26.663 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.663 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.663 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.663 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.663 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.663 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.663 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.663 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.663 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.663 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.663 "name": "raid_bdev1", 00:15:26.663 "uuid": "dddb1718-ceea-4b02-85ed-c6ba54e78474", 00:15:26.663 "strip_size_kb": 64, 00:15:26.663 "state": "configuring", 00:15:26.663 "raid_level": "raid0", 00:15:26.663 "superblock": true, 00:15:26.663 "num_base_bdevs": 4, 00:15:26.663 "num_base_bdevs_discovered": 1, 00:15:26.663 "num_base_bdevs_operational": 4, 00:15:26.663 "base_bdevs_list": [ 00:15:26.663 { 00:15:26.663 "name": "pt1", 00:15:26.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:26.663 "is_configured": true, 00:15:26.663 "data_offset": 2048, 00:15:26.663 "data_size": 63488 00:15:26.663 }, 00:15:26.663 { 00:15:26.663 "name": null, 00:15:26.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:26.663 "is_configured": false, 00:15:26.663 "data_offset": 2048, 00:15:26.663 "data_size": 63488 00:15:26.663 }, 00:15:26.663 { 00:15:26.663 "name": null, 00:15:26.663 
"uuid": "00000000-0000-0000-0000-000000000003", 00:15:26.663 "is_configured": false, 00:15:26.663 "data_offset": 2048, 00:15:26.663 "data_size": 63488 00:15:26.663 }, 00:15:26.663 { 00:15:26.663 "name": null, 00:15:26.663 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:26.663 "is_configured": false, 00:15:26.663 "data_offset": 2048, 00:15:26.663 "data_size": 63488 00:15:26.663 } 00:15:26.663 ] 00:15:26.663 }' 00:15:26.663 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.663 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.228 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:27.228 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:27.228 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.228 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.228 [2024-12-06 06:41:45.720583] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:27.228 [2024-12-06 06:41:45.720674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.228 [2024-12-06 06:41:45.720704] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:27.228 [2024-12-06 06:41:45.720722] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.228 [2024-12-06 06:41:45.721326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.228 [2024-12-06 06:41:45.721405] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:27.228 [2024-12-06 06:41:45.721587] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:27.228 [2024-12-06 06:41:45.721719] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:27.228 pt2 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.229 [2024-12-06 06:41:45.728573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.229 06:41:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.229 "name": "raid_bdev1", 00:15:27.229 "uuid": "dddb1718-ceea-4b02-85ed-c6ba54e78474", 00:15:27.229 "strip_size_kb": 64, 00:15:27.229 "state": "configuring", 00:15:27.229 "raid_level": "raid0", 00:15:27.229 "superblock": true, 00:15:27.229 "num_base_bdevs": 4, 00:15:27.229 "num_base_bdevs_discovered": 1, 00:15:27.229 "num_base_bdevs_operational": 4, 00:15:27.229 "base_bdevs_list": [ 00:15:27.229 { 00:15:27.229 "name": "pt1", 00:15:27.229 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:27.229 "is_configured": true, 00:15:27.229 "data_offset": 2048, 00:15:27.229 "data_size": 63488 00:15:27.229 }, 00:15:27.229 { 00:15:27.229 "name": null, 00:15:27.229 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.229 "is_configured": false, 00:15:27.229 "data_offset": 0, 00:15:27.229 "data_size": 63488 00:15:27.229 }, 00:15:27.229 { 00:15:27.229 "name": null, 00:15:27.229 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:27.229 "is_configured": false, 00:15:27.229 "data_offset": 2048, 00:15:27.229 "data_size": 63488 00:15:27.229 }, 00:15:27.229 { 00:15:27.229 "name": null, 00:15:27.229 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:27.229 "is_configured": false, 00:15:27.229 "data_offset": 2048, 00:15:27.229 "data_size": 63488 00:15:27.229 } 00:15:27.229 ] 00:15:27.229 }' 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.229 06:41:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.795 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:27.795 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:27.795 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:27.795 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.795 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.795 [2024-12-06 06:41:46.272729] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:27.795 [2024-12-06 06:41:46.272809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.795 [2024-12-06 06:41:46.272841] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:27.795 [2024-12-06 06:41:46.272856] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.795 [2024-12-06 06:41:46.273498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.795 [2024-12-06 06:41:46.273524] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:27.795 [2024-12-06 06:41:46.273645] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:27.795 [2024-12-06 06:41:46.273679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:27.795 pt2 00:15:27.795 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.795 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:27.795 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:27.795 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:27.795 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.795 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.796 [2024-12-06 06:41:46.280707] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:27.796 [2024-12-06 06:41:46.280763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.796 [2024-12-06 06:41:46.280790] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:27.796 [2024-12-06 06:41:46.280803] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.796 [2024-12-06 06:41:46.281311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.796 [2024-12-06 06:41:46.281351] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:27.796 [2024-12-06 06:41:46.281471] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:27.796 [2024-12-06 06:41:46.281509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:27.796 pt3 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.796 [2024-12-06 06:41:46.288680] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:27.796 [2024-12-06 06:41:46.288744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.796 [2024-12-06 06:41:46.288768] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:27.796 [2024-12-06 06:41:46.288782] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.796 [2024-12-06 06:41:46.289274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.796 [2024-12-06 06:41:46.289320] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:27.796 [2024-12-06 06:41:46.289406] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:27.796 [2024-12-06 06:41:46.289468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:27.796 [2024-12-06 06:41:46.289687] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:27.796 [2024-12-06 06:41:46.289710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:27.796 [2024-12-06 06:41:46.290036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:27.796 [2024-12-06 06:41:46.290231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:27.796 [2024-12-06 06:41:46.290253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:27.796 [2024-12-06 06:41:46.290401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.796 pt4 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.796 "name": "raid_bdev1", 00:15:27.796 "uuid": "dddb1718-ceea-4b02-85ed-c6ba54e78474", 00:15:27.796 "strip_size_kb": 64, 00:15:27.796 "state": "online", 00:15:27.796 "raid_level": "raid0", 00:15:27.796 
"superblock": true, 00:15:27.796 "num_base_bdevs": 4, 00:15:27.796 "num_base_bdevs_discovered": 4, 00:15:27.796 "num_base_bdevs_operational": 4, 00:15:27.796 "base_bdevs_list": [ 00:15:27.796 { 00:15:27.796 "name": "pt1", 00:15:27.796 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:27.796 "is_configured": true, 00:15:27.796 "data_offset": 2048, 00:15:27.796 "data_size": 63488 00:15:27.796 }, 00:15:27.796 { 00:15:27.796 "name": "pt2", 00:15:27.796 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.796 "is_configured": true, 00:15:27.796 "data_offset": 2048, 00:15:27.796 "data_size": 63488 00:15:27.796 }, 00:15:27.796 { 00:15:27.796 "name": "pt3", 00:15:27.796 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:27.796 "is_configured": true, 00:15:27.796 "data_offset": 2048, 00:15:27.796 "data_size": 63488 00:15:27.796 }, 00:15:27.796 { 00:15:27.796 "name": "pt4", 00:15:27.796 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:27.796 "is_configured": true, 00:15:27.796 "data_offset": 2048, 00:15:27.796 "data_size": 63488 00:15:27.796 } 00:15:27.796 ] 00:15:27.796 }' 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.796 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.362 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:28.362 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:28.362 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:28.362 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:28.362 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:28.362 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:28.362 06:41:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:28.362 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:28.362 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.362 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.362 [2024-12-06 06:41:46.781259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.362 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.362 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:28.362 "name": "raid_bdev1", 00:15:28.362 "aliases": [ 00:15:28.362 "dddb1718-ceea-4b02-85ed-c6ba54e78474" 00:15:28.362 ], 00:15:28.362 "product_name": "Raid Volume", 00:15:28.362 "block_size": 512, 00:15:28.362 "num_blocks": 253952, 00:15:28.362 "uuid": "dddb1718-ceea-4b02-85ed-c6ba54e78474", 00:15:28.362 "assigned_rate_limits": { 00:15:28.362 "rw_ios_per_sec": 0, 00:15:28.362 "rw_mbytes_per_sec": 0, 00:15:28.362 "r_mbytes_per_sec": 0, 00:15:28.362 "w_mbytes_per_sec": 0 00:15:28.362 }, 00:15:28.362 "claimed": false, 00:15:28.362 "zoned": false, 00:15:28.362 "supported_io_types": { 00:15:28.362 "read": true, 00:15:28.362 "write": true, 00:15:28.362 "unmap": true, 00:15:28.362 "flush": true, 00:15:28.362 "reset": true, 00:15:28.362 "nvme_admin": false, 00:15:28.362 "nvme_io": false, 00:15:28.362 "nvme_io_md": false, 00:15:28.362 "write_zeroes": true, 00:15:28.362 "zcopy": false, 00:15:28.362 "get_zone_info": false, 00:15:28.362 "zone_management": false, 00:15:28.362 "zone_append": false, 00:15:28.362 "compare": false, 00:15:28.362 "compare_and_write": false, 00:15:28.362 "abort": false, 00:15:28.362 "seek_hole": false, 00:15:28.362 "seek_data": false, 00:15:28.362 "copy": false, 00:15:28.362 "nvme_iov_md": false 00:15:28.362 }, 00:15:28.362 
"memory_domains": [ 00:15:28.362 { 00:15:28.362 "dma_device_id": "system", 00:15:28.362 "dma_device_type": 1 00:15:28.362 }, 00:15:28.362 { 00:15:28.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.362 "dma_device_type": 2 00:15:28.362 }, 00:15:28.362 { 00:15:28.362 "dma_device_id": "system", 00:15:28.362 "dma_device_type": 1 00:15:28.362 }, 00:15:28.362 { 00:15:28.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.362 "dma_device_type": 2 00:15:28.363 }, 00:15:28.363 { 00:15:28.363 "dma_device_id": "system", 00:15:28.363 "dma_device_type": 1 00:15:28.363 }, 00:15:28.363 { 00:15:28.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.363 "dma_device_type": 2 00:15:28.363 }, 00:15:28.363 { 00:15:28.363 "dma_device_id": "system", 00:15:28.363 "dma_device_type": 1 00:15:28.363 }, 00:15:28.363 { 00:15:28.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.363 "dma_device_type": 2 00:15:28.363 } 00:15:28.363 ], 00:15:28.363 "driver_specific": { 00:15:28.363 "raid": { 00:15:28.363 "uuid": "dddb1718-ceea-4b02-85ed-c6ba54e78474", 00:15:28.363 "strip_size_kb": 64, 00:15:28.363 "state": "online", 00:15:28.363 "raid_level": "raid0", 00:15:28.363 "superblock": true, 00:15:28.363 "num_base_bdevs": 4, 00:15:28.363 "num_base_bdevs_discovered": 4, 00:15:28.363 "num_base_bdevs_operational": 4, 00:15:28.363 "base_bdevs_list": [ 00:15:28.363 { 00:15:28.363 "name": "pt1", 00:15:28.363 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:28.363 "is_configured": true, 00:15:28.363 "data_offset": 2048, 00:15:28.363 "data_size": 63488 00:15:28.363 }, 00:15:28.363 { 00:15:28.363 "name": "pt2", 00:15:28.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.363 "is_configured": true, 00:15:28.363 "data_offset": 2048, 00:15:28.363 "data_size": 63488 00:15:28.363 }, 00:15:28.363 { 00:15:28.363 "name": "pt3", 00:15:28.363 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:28.363 "is_configured": true, 00:15:28.363 "data_offset": 2048, 00:15:28.363 "data_size": 63488 
00:15:28.363 }, 00:15:28.363 { 00:15:28.363 "name": "pt4", 00:15:28.363 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:28.363 "is_configured": true, 00:15:28.363 "data_offset": 2048, 00:15:28.363 "data_size": 63488 00:15:28.363 } 00:15:28.363 ] 00:15:28.363 } 00:15:28.363 } 00:15:28.363 }' 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:28.363 pt2 00:15:28.363 pt3 00:15:28.363 pt4' 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.363 06:41:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:28.623 [2024-12-06 06:41:47.141319] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' dddb1718-ceea-4b02-85ed-c6ba54e78474 '!=' dddb1718-ceea-4b02-85ed-c6ba54e78474 ']' 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71020 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71020 ']' 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71020 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71020 00:15:28.623 killing process with pid 71020 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71020' 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 71020 00:15:28.623 [2024-12-06 06:41:47.221339] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:28.623 06:41:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 71020 00:15:28.623 [2024-12-06 06:41:47.221473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.623 [2024-12-06 06:41:47.221605] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.623 [2024-12-06 06:41:47.221623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:29.191 [2024-12-06 06:41:47.579825] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.126 06:41:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:30.126 00:15:30.126 real 0m5.833s 00:15:30.126 user 0m8.774s 00:15:30.126 sys 0m0.844s 00:15:30.126 06:41:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:30.126 ************************************ 00:15:30.126 END TEST raid_superblock_test 00:15:30.126 ************************************ 00:15:30.126 06:41:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.126 06:41:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:15:30.126 06:41:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:30.126 06:41:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:30.126 06:41:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.126 ************************************ 00:15:30.126 START TEST raid_read_error_test 00:15:30.126 ************************************ 00:15:30.126 06:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:15:30.126 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:15:30.126 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:30.126 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:30.126 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:30.126 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:30.126 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:30.126 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:30.126 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:30.126 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:30.126 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:30.126 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:30.126 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:30.126 06:41:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:30.126 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:30.126 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:30.126 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6vbJqGxzon 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71284 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71284 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71284 ']' 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.127 06:41:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.385 [2024-12-06 06:41:48.782679] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:15:30.385 [2024-12-06 06:41:48.782834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71284 ] 00:15:30.385 [2024-12-06 06:41:48.963318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.643 [2024-12-06 06:41:49.118462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.902 [2024-12-06 06:41:49.352197] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.902 [2024-12-06 06:41:49.352269] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.159 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.159 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:31.159 06:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:31.159 06:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:31.159 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.159 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.417 BaseBdev1_malloc 00:15:31.417 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.417 06:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:31.417 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.417 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.417 true 00:15:31.417 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:31.417 06:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:31.417 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.417 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.417 [2024-12-06 06:41:49.829972] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:31.417 [2024-12-06 06:41:49.830041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.417 [2024-12-06 06:41:49.830071] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:31.417 [2024-12-06 06:41:49.830089] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.417 [2024-12-06 06:41:49.832867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.417 [2024-12-06 06:41:49.832917] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:31.417 BaseBdev1 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.418 BaseBdev2_malloc 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.418 true 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.418 [2024-12-06 06:41:49.886455] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:31.418 [2024-12-06 06:41:49.886675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.418 [2024-12-06 06:41:49.886711] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:31.418 [2024-12-06 06:41:49.886731] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.418 [2024-12-06 06:41:49.889469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.418 [2024-12-06 06:41:49.889519] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:31.418 BaseBdev2 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.418 BaseBdev3_malloc 00:15:31.418 06:41:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.418 true 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.418 [2024-12-06 06:41:49.958539] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:31.418 [2024-12-06 06:41:49.958606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.418 [2024-12-06 06:41:49.958633] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:31.418 [2024-12-06 06:41:49.958651] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.418 [2024-12-06 06:41:49.961401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.418 [2024-12-06 06:41:49.961624] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:31.418 BaseBdev3 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.418 06:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.418 BaseBdev4_malloc 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.418 true 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.418 [2024-12-06 06:41:50.015830] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:31.418 [2024-12-06 06:41:50.016017] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.418 [2024-12-06 06:41:50.016055] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:31.418 [2024-12-06 06:41:50.016075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.418 [2024-12-06 06:41:50.018853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.418 [2024-12-06 06:41:50.018906] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:31.418 BaseBdev4 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.418 [2024-12-06 06:41:50.023914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.418 [2024-12-06 06:41:50.026303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.418 [2024-12-06 06:41:50.026556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.418 [2024-12-06 06:41:50.026668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:31.418 [2024-12-06 06:41:50.026958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:31.418 [2024-12-06 06:41:50.026986] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:31.418 [2024-12-06 06:41:50.027288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:31.418 [2024-12-06 06:41:50.027498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:31.418 [2024-12-06 06:41:50.027516] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:31.418 [2024-12-06 06:41:50.027736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:31.418 06:41:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.418 06:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.677 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.677 "name": "raid_bdev1", 00:15:31.677 "uuid": "adacd810-585e-425b-aead-51da1be4b5fb", 00:15:31.677 "strip_size_kb": 64, 00:15:31.677 "state": "online", 00:15:31.677 "raid_level": "raid0", 00:15:31.677 "superblock": true, 00:15:31.677 "num_base_bdevs": 4, 00:15:31.677 "num_base_bdevs_discovered": 4, 00:15:31.677 "num_base_bdevs_operational": 4, 00:15:31.677 "base_bdevs_list": [ 00:15:31.677 
{ 00:15:31.677 "name": "BaseBdev1", 00:15:31.677 "uuid": "de4be2d1-6dc2-51f9-af66-67245fdfc6a6", 00:15:31.677 "is_configured": true, 00:15:31.677 "data_offset": 2048, 00:15:31.677 "data_size": 63488 00:15:31.677 }, 00:15:31.677 { 00:15:31.677 "name": "BaseBdev2", 00:15:31.677 "uuid": "4e600456-e2d4-55d6-85a0-2b50ffd928b8", 00:15:31.677 "is_configured": true, 00:15:31.677 "data_offset": 2048, 00:15:31.677 "data_size": 63488 00:15:31.677 }, 00:15:31.677 { 00:15:31.677 "name": "BaseBdev3", 00:15:31.677 "uuid": "69ee0b73-c203-5f1b-a005-172820397b2f", 00:15:31.677 "is_configured": true, 00:15:31.677 "data_offset": 2048, 00:15:31.677 "data_size": 63488 00:15:31.677 }, 00:15:31.677 { 00:15:31.677 "name": "BaseBdev4", 00:15:31.677 "uuid": "8fcb2a77-6167-5c57-99ab-8d4acd42b63e", 00:15:31.677 "is_configured": true, 00:15:31.677 "data_offset": 2048, 00:15:31.677 "data_size": 63488 00:15:31.677 } 00:15:31.677 ] 00:15:31.677 }' 00:15:31.677 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.677 06:41:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.937 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:31.937 06:41:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:32.203 [2024-12-06 06:41:50.653398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.154 06:41:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.154 06:41:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.154 "name": "raid_bdev1", 00:15:33.154 "uuid": "adacd810-585e-425b-aead-51da1be4b5fb", 00:15:33.154 "strip_size_kb": 64, 00:15:33.154 "state": "online", 00:15:33.154 "raid_level": "raid0", 00:15:33.154 "superblock": true, 00:15:33.154 "num_base_bdevs": 4, 00:15:33.154 "num_base_bdevs_discovered": 4, 00:15:33.154 "num_base_bdevs_operational": 4, 00:15:33.154 "base_bdevs_list": [ 00:15:33.154 { 00:15:33.154 "name": "BaseBdev1", 00:15:33.154 "uuid": "de4be2d1-6dc2-51f9-af66-67245fdfc6a6", 00:15:33.154 "is_configured": true, 00:15:33.154 "data_offset": 2048, 00:15:33.154 "data_size": 63488 00:15:33.154 }, 00:15:33.154 { 00:15:33.154 "name": "BaseBdev2", 00:15:33.154 "uuid": "4e600456-e2d4-55d6-85a0-2b50ffd928b8", 00:15:33.154 "is_configured": true, 00:15:33.154 "data_offset": 2048, 00:15:33.154 "data_size": 63488 00:15:33.154 }, 00:15:33.154 { 00:15:33.154 "name": "BaseBdev3", 00:15:33.154 "uuid": "69ee0b73-c203-5f1b-a005-172820397b2f", 00:15:33.154 "is_configured": true, 00:15:33.154 "data_offset": 2048, 00:15:33.154 "data_size": 63488 00:15:33.154 }, 00:15:33.154 { 00:15:33.154 "name": "BaseBdev4", 00:15:33.154 "uuid": "8fcb2a77-6167-5c57-99ab-8d4acd42b63e", 00:15:33.154 "is_configured": true, 00:15:33.154 "data_offset": 2048, 00:15:33.154 "data_size": 63488 00:15:33.154 } 00:15:33.154 ] 00:15:33.154 }' 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.154 06:41:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.720 06:41:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:33.720 06:41:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.720 06:41:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.720 [2024-12-06 06:41:52.080656] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:33.720 [2024-12-06 06:41:52.080693] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.720 [2024-12-06 06:41:52.084184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.720 [2024-12-06 06:41:52.084255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.720 [2024-12-06 06:41:52.084312] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.720 [2024-12-06 06:41:52.084330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:33.720 { 00:15:33.720 "results": [ 00:15:33.720 { 00:15:33.720 "job": "raid_bdev1", 00:15:33.720 "core_mask": "0x1", 00:15:33.720 "workload": "randrw", 00:15:33.720 "percentage": 50, 00:15:33.720 "status": "finished", 00:15:33.720 "queue_depth": 1, 00:15:33.720 "io_size": 131072, 00:15:33.720 "runtime": 1.424887, 00:15:33.720 "iops": 10581.891757030557, 00:15:33.720 "mibps": 1322.7364696288196, 00:15:33.720 "io_failed": 1, 00:15:33.720 "io_timeout": 0, 00:15:33.720 "avg_latency_us": 131.55074619127146, 00:15:33.720 "min_latency_us": 38.63272727272727, 00:15:33.720 "max_latency_us": 1869.2654545454545 00:15:33.720 } 00:15:33.720 ], 00:15:33.720 "core_count": 1 00:15:33.720 } 00:15:33.720 06:41:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.720 06:41:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71284 00:15:33.720 06:41:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71284 ']' 00:15:33.720 06:41:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71284 00:15:33.720 06:41:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:15:33.720 06:41:52 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:33.720 06:41:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71284 00:15:33.720 killing process with pid 71284 00:15:33.720 06:41:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:33.720 06:41:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:33.720 06:41:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71284' 00:15:33.720 06:41:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71284 00:15:33.720 [2024-12-06 06:41:52.118306] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:33.720 06:41:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71284 00:15:33.977 [2024-12-06 06:41:52.409064] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:34.912 06:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6vbJqGxzon 00:15:34.912 06:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:34.912 06:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:34.912 ************************************ 00:15:34.912 END TEST raid_read_error_test 00:15:34.912 ************************************ 00:15:34.912 06:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:15:34.912 06:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:15:34.912 06:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:34.912 06:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:34.912 06:41:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:15:34.912 00:15:34.912 real 0m4.849s 
00:15:34.912 user 0m5.963s 00:15:34.912 sys 0m0.600s 00:15:34.912 06:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.912 06:41:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.170 06:41:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:15:35.170 06:41:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:35.170 06:41:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:35.170 06:41:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:35.170 ************************************ 00:15:35.170 START TEST raid_write_error_test 00:15:35.170 ************************************ 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CZhq9kdOSv 00:15:35.170 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71430 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71430 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71430 ']' 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.170 06:41:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.170 [2024-12-06 06:41:53.695433] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:15:35.170 [2024-12-06 06:41:53.695650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71430 ] 00:15:35.428 [2024-12-06 06:41:53.878660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.428 [2024-12-06 06:41:54.009266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.686 [2024-12-06 06:41:54.220348] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.686 [2024-12-06 06:41:54.220411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.254 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.254 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:36.254 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:36.254 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:36.254 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.254 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.254 BaseBdev1_malloc 00:15:36.254 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.254 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:36.254 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.254 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.254 true 00:15:36.254 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:36.254 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:36.254 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.254 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.255 [2024-12-06 06:41:54.763660] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:36.255 [2024-12-06 06:41:54.763729] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.255 [2024-12-06 06:41:54.763759] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:36.255 [2024-12-06 06:41:54.763777] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.255 [2024-12-06 06:41:54.766663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.255 [2024-12-06 06:41:54.766716] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:36.255 BaseBdev1 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.255 BaseBdev2_malloc 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:36.255 06:41:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.255 true 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.255 [2024-12-06 06:41:54.823541] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:36.255 [2024-12-06 06:41:54.823619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.255 [2024-12-06 06:41:54.823644] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:36.255 [2024-12-06 06:41:54.823661] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.255 [2024-12-06 06:41:54.826461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.255 [2024-12-06 06:41:54.826673] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:36.255 BaseBdev2 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:36.255 BaseBdev3_malloc 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.255 true 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.255 [2024-12-06 06:41:54.892141] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:36.255 [2024-12-06 06:41:54.892209] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.255 [2024-12-06 06:41:54.892236] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:36.255 [2024-12-06 06:41:54.892253] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.255 [2024-12-06 06:41:54.895029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.255 [2024-12-06 06:41:54.895078] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:36.255 BaseBdev3 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.255 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.524 BaseBdev4_malloc 00:15:36.524 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.524 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:36.524 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.524 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.524 true 00:15:36.524 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.525 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:36.525 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.525 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.525 [2024-12-06 06:41:54.948497] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:36.525 [2024-12-06 06:41:54.948581] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.525 [2024-12-06 06:41:54.948610] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:36.525 [2024-12-06 06:41:54.948626] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.525 [2024-12-06 06:41:54.951394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.525 [2024-12-06 06:41:54.951449] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:36.525 BaseBdev4 
00:15:36.525 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.525 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:36.525 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.525 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.525 [2024-12-06 06:41:54.956593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.525 [2024-12-06 06:41:54.959000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.525 [2024-12-06 06:41:54.959104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:36.525 [2024-12-06 06:41:54.959199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:36.525 [2024-12-06 06:41:54.959490] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:36.525 [2024-12-06 06:41:54.959517] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:36.525 [2024-12-06 06:41:54.959845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:36.525 [2024-12-06 06:41:54.960057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:36.525 [2024-12-06 06:41:54.960082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:36.525 [2024-12-06 06:41:54.960284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.525 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.525 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:15:36.525 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.525 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.525 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:36.525 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.525 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.525 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.525 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.526 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.526 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.526 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.526 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.526 06:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.526 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.526 06:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.526 06:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.526 "name": "raid_bdev1", 00:15:36.526 "uuid": "7d6a7c25-3598-4c98-ab94-b1865569ba6a", 00:15:36.526 "strip_size_kb": 64, 00:15:36.526 "state": "online", 00:15:36.526 "raid_level": "raid0", 00:15:36.526 "superblock": true, 00:15:36.526 "num_base_bdevs": 4, 00:15:36.526 "num_base_bdevs_discovered": 4, 00:15:36.526 
"num_base_bdevs_operational": 4, 00:15:36.526 "base_bdevs_list": [ 00:15:36.526 { 00:15:36.526 "name": "BaseBdev1", 00:15:36.526 "uuid": "2ffb0150-7827-5d53-bb47-f92b680b6b3c", 00:15:36.526 "is_configured": true, 00:15:36.526 "data_offset": 2048, 00:15:36.526 "data_size": 63488 00:15:36.526 }, 00:15:36.526 { 00:15:36.526 "name": "BaseBdev2", 00:15:36.526 "uuid": "65c26b5a-b36c-5fa4-9311-bb4666776987", 00:15:36.526 "is_configured": true, 00:15:36.526 "data_offset": 2048, 00:15:36.526 "data_size": 63488 00:15:36.526 }, 00:15:36.526 { 00:15:36.526 "name": "BaseBdev3", 00:15:36.526 "uuid": "c1b47097-3d2c-5f6a-ab07-362bc502f689", 00:15:36.527 "is_configured": true, 00:15:36.527 "data_offset": 2048, 00:15:36.527 "data_size": 63488 00:15:36.527 }, 00:15:36.527 { 00:15:36.527 "name": "BaseBdev4", 00:15:36.527 "uuid": "267277e0-04e4-5c9e-b091-43ccf2b37a53", 00:15:36.527 "is_configured": true, 00:15:36.527 "data_offset": 2048, 00:15:36.527 "data_size": 63488 00:15:36.527 } 00:15:36.527 ] 00:15:36.527 }' 00:15:36.527 06:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.527 06:41:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.094 06:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:37.094 06:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:37.094 [2024-12-06 06:41:55.558164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.027 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.027 "name": "raid_bdev1", 00:15:38.027 "uuid": "7d6a7c25-3598-4c98-ab94-b1865569ba6a", 00:15:38.027 "strip_size_kb": 64, 00:15:38.027 "state": "online", 00:15:38.027 "raid_level": "raid0", 00:15:38.028 "superblock": true, 00:15:38.028 "num_base_bdevs": 4, 00:15:38.028 "num_base_bdevs_discovered": 4, 00:15:38.028 "num_base_bdevs_operational": 4, 00:15:38.028 "base_bdevs_list": [ 00:15:38.028 { 00:15:38.028 "name": "BaseBdev1", 00:15:38.028 "uuid": "2ffb0150-7827-5d53-bb47-f92b680b6b3c", 00:15:38.028 "is_configured": true, 00:15:38.028 "data_offset": 2048, 00:15:38.028 "data_size": 63488 00:15:38.028 }, 00:15:38.028 { 00:15:38.028 "name": "BaseBdev2", 00:15:38.028 "uuid": "65c26b5a-b36c-5fa4-9311-bb4666776987", 00:15:38.028 "is_configured": true, 00:15:38.028 "data_offset": 2048, 00:15:38.028 "data_size": 63488 00:15:38.028 }, 00:15:38.028 { 00:15:38.028 "name": "BaseBdev3", 00:15:38.028 "uuid": "c1b47097-3d2c-5f6a-ab07-362bc502f689", 00:15:38.028 "is_configured": true, 00:15:38.028 "data_offset": 2048, 00:15:38.028 "data_size": 63488 00:15:38.028 }, 00:15:38.028 { 00:15:38.028 "name": "BaseBdev4", 00:15:38.028 "uuid": "267277e0-04e4-5c9e-b091-43ccf2b37a53", 00:15:38.028 "is_configured": true, 00:15:38.028 "data_offset": 2048, 00:15:38.028 "data_size": 63488 00:15:38.028 } 00:15:38.028 ] 00:15:38.028 }' 00:15:38.028 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.028 06:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.594 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:38.594 06:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.594 06:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:38.594 [2024-12-06 06:41:56.973670] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:38.594 [2024-12-06 06:41:56.973851] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.594 [2024-12-06 06:41:56.977375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.594 [2024-12-06 06:41:56.977602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.594 [2024-12-06 06:41:56.977677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.594 [2024-12-06 06:41:56.977697] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:38.594 { 00:15:38.594 "results": [ 00:15:38.594 { 00:15:38.594 "job": "raid_bdev1", 00:15:38.594 "core_mask": "0x1", 00:15:38.594 "workload": "randrw", 00:15:38.594 "percentage": 50, 00:15:38.594 "status": "finished", 00:15:38.594 "queue_depth": 1, 00:15:38.594 "io_size": 131072, 00:15:38.595 "runtime": 1.413123, 00:15:38.595 "iops": 10408.8603752115, 00:15:38.595 "mibps": 1301.1075469014374, 00:15:38.595 "io_failed": 1, 00:15:38.595 "io_timeout": 0, 00:15:38.595 "avg_latency_us": 133.93954489833754, 00:15:38.595 "min_latency_us": 39.56363636363636, 00:15:38.595 "max_latency_us": 1839.4763636363637 00:15:38.595 } 00:15:38.595 ], 00:15:38.595 "core_count": 1 00:15:38.595 } 00:15:38.595 06:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.595 06:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71430 00:15:38.595 06:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71430 ']' 00:15:38.595 06:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71430 00:15:38.595 06:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:15:38.595 06:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.595 06:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71430 00:15:38.595 killing process with pid 71430 00:15:38.595 06:41:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.595 06:41:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.595 06:41:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71430' 00:15:38.595 06:41:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71430 00:15:38.595 [2024-12-06 06:41:57.014475] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:38.595 06:41:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71430 00:15:38.852 [2024-12-06 06:41:57.306262] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:39.796 06:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:39.796 06:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CZhq9kdOSv 00:15:39.796 06:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:40.078 06:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:15:40.078 06:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:15:40.078 06:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:40.078 06:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:40.078 06:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:15:40.078 00:15:40.078 real 0m4.861s 00:15:40.078 user 0m5.952s 00:15:40.078 sys 0m0.609s 00:15:40.078 06:41:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.078 06:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.078 ************************************ 00:15:40.078 END TEST raid_write_error_test 00:15:40.078 ************************************ 00:15:40.078 06:41:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:40.078 06:41:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:15:40.078 06:41:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:40.078 06:41:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.078 06:41:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:40.078 ************************************ 00:15:40.078 START TEST raid_state_function_test 00:15:40.078 ************************************ 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:40.078 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:40.079 Process raid pid: 71574 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71574 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71574' 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71574 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71574 ']' 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.079 06:41:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.079 [2024-12-06 06:41:58.598716] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:15:40.079 [2024-12-06 06:41:58.599051] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.337 [2024-12-06 06:41:58.784861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.337 [2024-12-06 06:41:58.915899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.595 [2024-12-06 06:41:59.128917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:40.595 [2024-12-06 06:41:59.128972] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.161 [2024-12-06 06:41:59.570070] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:41.161 [2024-12-06 06:41:59.570284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:41.161 [2024-12-06 06:41:59.570315] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:41.161 [2024-12-06 06:41:59.570334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:41.161 [2024-12-06 06:41:59.570345] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:41.161 [2024-12-06 06:41:59.570360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:41.161 [2024-12-06 06:41:59.570370] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:41.161 [2024-12-06 06:41:59.570385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.161 "name": "Existed_Raid", 00:15:41.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.161 "strip_size_kb": 64, 00:15:41.161 "state": "configuring", 00:15:41.161 "raid_level": "concat", 00:15:41.161 "superblock": false, 00:15:41.161 "num_base_bdevs": 4, 00:15:41.161 "num_base_bdevs_discovered": 0, 00:15:41.161 "num_base_bdevs_operational": 4, 00:15:41.161 "base_bdevs_list": [ 00:15:41.161 { 00:15:41.161 "name": "BaseBdev1", 00:15:41.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.161 "is_configured": false, 00:15:41.161 "data_offset": 0, 00:15:41.161 "data_size": 0 00:15:41.161 }, 00:15:41.161 { 00:15:41.161 "name": "BaseBdev2", 00:15:41.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.161 "is_configured": false, 00:15:41.161 "data_offset": 0, 00:15:41.161 "data_size": 0 00:15:41.161 }, 00:15:41.161 { 00:15:41.161 "name": "BaseBdev3", 00:15:41.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.161 "is_configured": false, 00:15:41.161 "data_offset": 0, 00:15:41.161 "data_size": 0 00:15:41.161 }, 00:15:41.161 { 00:15:41.161 "name": "BaseBdev4", 00:15:41.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.161 "is_configured": false, 00:15:41.161 "data_offset": 0, 00:15:41.161 "data_size": 0 00:15:41.161 } 00:15:41.161 ] 00:15:41.161 }' 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.161 06:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.420 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:15:41.420 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.420 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.420 [2024-12-06 06:42:00.042151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:41.420 [2024-12-06 06:42:00.042338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:41.420 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.420 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:41.420 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.420 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.420 [2024-12-06 06:42:00.050146] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:41.420 [2024-12-06 06:42:00.050201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:41.420 [2024-12-06 06:42:00.050217] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:41.420 [2024-12-06 06:42:00.050233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:41.420 [2024-12-06 06:42:00.050243] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:41.420 [2024-12-06 06:42:00.050258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:41.420 [2024-12-06 06:42:00.050268] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:41.420 [2024-12-06 06:42:00.050282] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:41.420 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.420 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:41.420 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.420 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.678 [2024-12-06 06:42:00.095528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.678 BaseBdev1 00:15:41.678 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.678 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:41.678 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:41.678 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:41.678 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:41.678 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:41.678 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:41.678 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:41.678 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.678 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.678 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.679 [ 00:15:41.679 { 00:15:41.679 "name": "BaseBdev1", 00:15:41.679 "aliases": [ 00:15:41.679 "a4577bb7-0940-4076-8da2-1410375d0a64" 00:15:41.679 ], 00:15:41.679 "product_name": "Malloc disk", 00:15:41.679 "block_size": 512, 00:15:41.679 "num_blocks": 65536, 00:15:41.679 "uuid": "a4577bb7-0940-4076-8da2-1410375d0a64", 00:15:41.679 "assigned_rate_limits": { 00:15:41.679 "rw_ios_per_sec": 0, 00:15:41.679 "rw_mbytes_per_sec": 0, 00:15:41.679 "r_mbytes_per_sec": 0, 00:15:41.679 "w_mbytes_per_sec": 0 00:15:41.679 }, 00:15:41.679 "claimed": true, 00:15:41.679 "claim_type": "exclusive_write", 00:15:41.679 "zoned": false, 00:15:41.679 "supported_io_types": { 00:15:41.679 "read": true, 00:15:41.679 "write": true, 00:15:41.679 "unmap": true, 00:15:41.679 "flush": true, 00:15:41.679 "reset": true, 00:15:41.679 "nvme_admin": false, 00:15:41.679 "nvme_io": false, 00:15:41.679 "nvme_io_md": false, 00:15:41.679 "write_zeroes": true, 00:15:41.679 "zcopy": true, 00:15:41.679 "get_zone_info": false, 00:15:41.679 "zone_management": false, 00:15:41.679 "zone_append": false, 00:15:41.679 "compare": false, 00:15:41.679 "compare_and_write": false, 00:15:41.679 "abort": true, 00:15:41.679 "seek_hole": false, 00:15:41.679 "seek_data": false, 00:15:41.679 "copy": true, 00:15:41.679 "nvme_iov_md": false 00:15:41.679 }, 00:15:41.679 "memory_domains": [ 00:15:41.679 { 00:15:41.679 "dma_device_id": "system", 00:15:41.679 "dma_device_type": 1 00:15:41.679 }, 00:15:41.679 { 00:15:41.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.679 "dma_device_type": 2 00:15:41.679 } 00:15:41.679 ], 00:15:41.679 "driver_specific": {} 00:15:41.679 } 00:15:41.679 ] 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.679 "name": "Existed_Raid", 
00:15:41.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.679 "strip_size_kb": 64, 00:15:41.679 "state": "configuring", 00:15:41.679 "raid_level": "concat", 00:15:41.679 "superblock": false, 00:15:41.679 "num_base_bdevs": 4, 00:15:41.679 "num_base_bdevs_discovered": 1, 00:15:41.679 "num_base_bdevs_operational": 4, 00:15:41.679 "base_bdevs_list": [ 00:15:41.679 { 00:15:41.679 "name": "BaseBdev1", 00:15:41.679 "uuid": "a4577bb7-0940-4076-8da2-1410375d0a64", 00:15:41.679 "is_configured": true, 00:15:41.679 "data_offset": 0, 00:15:41.679 "data_size": 65536 00:15:41.679 }, 00:15:41.679 { 00:15:41.679 "name": "BaseBdev2", 00:15:41.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.679 "is_configured": false, 00:15:41.679 "data_offset": 0, 00:15:41.679 "data_size": 0 00:15:41.679 }, 00:15:41.679 { 00:15:41.679 "name": "BaseBdev3", 00:15:41.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.679 "is_configured": false, 00:15:41.679 "data_offset": 0, 00:15:41.679 "data_size": 0 00:15:41.679 }, 00:15:41.679 { 00:15:41.679 "name": "BaseBdev4", 00:15:41.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.679 "is_configured": false, 00:15:41.679 "data_offset": 0, 00:15:41.679 "data_size": 0 00:15:41.679 } 00:15:41.679 ] 00:15:41.679 }' 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.679 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.246 [2024-12-06 06:42:00.607743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:42.246 [2024-12-06 06:42:00.607810] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.246 [2024-12-06 06:42:00.615806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.246 [2024-12-06 06:42:00.618326] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:42.246 [2024-12-06 06:42:00.618379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:42.246 [2024-12-06 06:42:00.618396] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:42.246 [2024-12-06 06:42:00.618414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:42.246 [2024-12-06 06:42:00.618425] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:42.246 [2024-12-06 06:42:00.618439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.246 "name": "Existed_Raid", 00:15:42.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.246 "strip_size_kb": 64, 00:15:42.246 "state": "configuring", 00:15:42.246 "raid_level": "concat", 00:15:42.246 "superblock": false, 00:15:42.246 "num_base_bdevs": 4, 00:15:42.246 
"num_base_bdevs_discovered": 1, 00:15:42.246 "num_base_bdevs_operational": 4, 00:15:42.246 "base_bdevs_list": [ 00:15:42.246 { 00:15:42.246 "name": "BaseBdev1", 00:15:42.246 "uuid": "a4577bb7-0940-4076-8da2-1410375d0a64", 00:15:42.246 "is_configured": true, 00:15:42.246 "data_offset": 0, 00:15:42.246 "data_size": 65536 00:15:42.246 }, 00:15:42.246 { 00:15:42.246 "name": "BaseBdev2", 00:15:42.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.246 "is_configured": false, 00:15:42.246 "data_offset": 0, 00:15:42.246 "data_size": 0 00:15:42.246 }, 00:15:42.246 { 00:15:42.246 "name": "BaseBdev3", 00:15:42.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.246 "is_configured": false, 00:15:42.246 "data_offset": 0, 00:15:42.246 "data_size": 0 00:15:42.246 }, 00:15:42.246 { 00:15:42.246 "name": "BaseBdev4", 00:15:42.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.246 "is_configured": false, 00:15:42.246 "data_offset": 0, 00:15:42.246 "data_size": 0 00:15:42.246 } 00:15:42.246 ] 00:15:42.246 }' 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.246 06:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.505 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:42.505 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.505 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.763 [2024-12-06 06:42:01.167128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:42.763 BaseBdev2 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:42.763 06:42:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.763 [ 00:15:42.763 { 00:15:42.763 "name": "BaseBdev2", 00:15:42.763 "aliases": [ 00:15:42.763 "fd467540-ca12-4e01-a577-791b2258c3ba" 00:15:42.763 ], 00:15:42.763 "product_name": "Malloc disk", 00:15:42.763 "block_size": 512, 00:15:42.763 "num_blocks": 65536, 00:15:42.763 "uuid": "fd467540-ca12-4e01-a577-791b2258c3ba", 00:15:42.763 "assigned_rate_limits": { 00:15:42.763 "rw_ios_per_sec": 0, 00:15:42.763 "rw_mbytes_per_sec": 0, 00:15:42.763 "r_mbytes_per_sec": 0, 00:15:42.763 "w_mbytes_per_sec": 0 00:15:42.763 }, 00:15:42.763 "claimed": true, 00:15:42.763 "claim_type": "exclusive_write", 00:15:42.763 "zoned": false, 00:15:42.763 "supported_io_types": { 
00:15:42.763 "read": true, 00:15:42.763 "write": true, 00:15:42.763 "unmap": true, 00:15:42.763 "flush": true, 00:15:42.763 "reset": true, 00:15:42.763 "nvme_admin": false, 00:15:42.763 "nvme_io": false, 00:15:42.763 "nvme_io_md": false, 00:15:42.763 "write_zeroes": true, 00:15:42.763 "zcopy": true, 00:15:42.763 "get_zone_info": false, 00:15:42.763 "zone_management": false, 00:15:42.763 "zone_append": false, 00:15:42.763 "compare": false, 00:15:42.763 "compare_and_write": false, 00:15:42.763 "abort": true, 00:15:42.763 "seek_hole": false, 00:15:42.763 "seek_data": false, 00:15:42.763 "copy": true, 00:15:42.763 "nvme_iov_md": false 00:15:42.763 }, 00:15:42.763 "memory_domains": [ 00:15:42.763 { 00:15:42.763 "dma_device_id": "system", 00:15:42.763 "dma_device_type": 1 00:15:42.763 }, 00:15:42.763 { 00:15:42.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.763 "dma_device_type": 2 00:15:42.763 } 00:15:42.763 ], 00:15:42.763 "driver_specific": {} 00:15:42.763 } 00:15:42.763 ] 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.763 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.763 "name": "Existed_Raid", 00:15:42.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.763 "strip_size_kb": 64, 00:15:42.763 "state": "configuring", 00:15:42.763 "raid_level": "concat", 00:15:42.763 "superblock": false, 00:15:42.764 "num_base_bdevs": 4, 00:15:42.764 "num_base_bdevs_discovered": 2, 00:15:42.764 "num_base_bdevs_operational": 4, 00:15:42.764 "base_bdevs_list": [ 00:15:42.764 { 00:15:42.764 "name": "BaseBdev1", 00:15:42.764 "uuid": "a4577bb7-0940-4076-8da2-1410375d0a64", 00:15:42.764 "is_configured": true, 00:15:42.764 "data_offset": 0, 00:15:42.764 "data_size": 65536 00:15:42.764 }, 00:15:42.764 { 00:15:42.764 "name": "BaseBdev2", 00:15:42.764 "uuid": "fd467540-ca12-4e01-a577-791b2258c3ba", 00:15:42.764 
"is_configured": true, 00:15:42.764 "data_offset": 0, 00:15:42.764 "data_size": 65536 00:15:42.764 }, 00:15:42.764 { 00:15:42.764 "name": "BaseBdev3", 00:15:42.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.764 "is_configured": false, 00:15:42.764 "data_offset": 0, 00:15:42.764 "data_size": 0 00:15:42.764 }, 00:15:42.764 { 00:15:42.764 "name": "BaseBdev4", 00:15:42.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.764 "is_configured": false, 00:15:42.764 "data_offset": 0, 00:15:42.764 "data_size": 0 00:15:42.764 } 00:15:42.764 ] 00:15:42.764 }' 00:15:42.764 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.764 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.329 [2024-12-06 06:42:01.739553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:43.329 BaseBdev3 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.329 [ 00:15:43.329 { 00:15:43.329 "name": "BaseBdev3", 00:15:43.329 "aliases": [ 00:15:43.329 "f6f11082-2abe-444b-83b2-393d7ffef893" 00:15:43.329 ], 00:15:43.329 "product_name": "Malloc disk", 00:15:43.329 "block_size": 512, 00:15:43.329 "num_blocks": 65536, 00:15:43.329 "uuid": "f6f11082-2abe-444b-83b2-393d7ffef893", 00:15:43.329 "assigned_rate_limits": { 00:15:43.329 "rw_ios_per_sec": 0, 00:15:43.329 "rw_mbytes_per_sec": 0, 00:15:43.329 "r_mbytes_per_sec": 0, 00:15:43.329 "w_mbytes_per_sec": 0 00:15:43.329 }, 00:15:43.329 "claimed": true, 00:15:43.329 "claim_type": "exclusive_write", 00:15:43.329 "zoned": false, 00:15:43.329 "supported_io_types": { 00:15:43.329 "read": true, 00:15:43.329 "write": true, 00:15:43.329 "unmap": true, 00:15:43.329 "flush": true, 00:15:43.329 "reset": true, 00:15:43.329 "nvme_admin": false, 00:15:43.329 "nvme_io": false, 00:15:43.329 "nvme_io_md": false, 00:15:43.329 "write_zeroes": true, 00:15:43.329 "zcopy": true, 00:15:43.329 "get_zone_info": false, 00:15:43.329 "zone_management": false, 00:15:43.329 "zone_append": false, 00:15:43.329 "compare": false, 00:15:43.329 "compare_and_write": false, 
00:15:43.329 "abort": true, 00:15:43.329 "seek_hole": false, 00:15:43.329 "seek_data": false, 00:15:43.329 "copy": true, 00:15:43.329 "nvme_iov_md": false 00:15:43.329 }, 00:15:43.329 "memory_domains": [ 00:15:43.329 { 00:15:43.329 "dma_device_id": "system", 00:15:43.329 "dma_device_type": 1 00:15:43.329 }, 00:15:43.329 { 00:15:43.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.329 "dma_device_type": 2 00:15:43.329 } 00:15:43.329 ], 00:15:43.329 "driver_specific": {} 00:15:43.329 } 00:15:43.329 ] 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:43.329 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.330 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.330 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.330 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.330 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.330 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.330 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.330 "name": "Existed_Raid", 00:15:43.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.330 "strip_size_kb": 64, 00:15:43.330 "state": "configuring", 00:15:43.330 "raid_level": "concat", 00:15:43.330 "superblock": false, 00:15:43.330 "num_base_bdevs": 4, 00:15:43.330 "num_base_bdevs_discovered": 3, 00:15:43.330 "num_base_bdevs_operational": 4, 00:15:43.330 "base_bdevs_list": [ 00:15:43.330 { 00:15:43.330 "name": "BaseBdev1", 00:15:43.330 "uuid": "a4577bb7-0940-4076-8da2-1410375d0a64", 00:15:43.330 "is_configured": true, 00:15:43.330 "data_offset": 0, 00:15:43.330 "data_size": 65536 00:15:43.330 }, 00:15:43.330 { 00:15:43.330 "name": "BaseBdev2", 00:15:43.330 "uuid": "fd467540-ca12-4e01-a577-791b2258c3ba", 00:15:43.330 "is_configured": true, 00:15:43.330 "data_offset": 0, 00:15:43.330 "data_size": 65536 00:15:43.330 }, 00:15:43.330 { 00:15:43.330 "name": "BaseBdev3", 00:15:43.330 "uuid": "f6f11082-2abe-444b-83b2-393d7ffef893", 00:15:43.330 "is_configured": true, 00:15:43.330 "data_offset": 0, 00:15:43.330 "data_size": 65536 00:15:43.330 }, 00:15:43.330 { 00:15:43.330 "name": "BaseBdev4", 00:15:43.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.330 "is_configured": false, 
00:15:43.330 "data_offset": 0, 00:15:43.330 "data_size": 0 00:15:43.330 } 00:15:43.330 ] 00:15:43.330 }' 00:15:43.330 06:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.330 06:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.896 [2024-12-06 06:42:02.287278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:43.896 [2024-12-06 06:42:02.287361] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:43.896 [2024-12-06 06:42:02.287376] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:43.896 [2024-12-06 06:42:02.287784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:43.896 [2024-12-06 06:42:02.287999] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:43.896 [2024-12-06 06:42:02.288028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:43.896 [2024-12-06 06:42:02.288391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.896 BaseBdev4 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.896 [ 00:15:43.896 { 00:15:43.896 "name": "BaseBdev4", 00:15:43.896 "aliases": [ 00:15:43.896 "46a7fc37-dcbc-4728-ba6f-970b8cadbb92" 00:15:43.896 ], 00:15:43.896 "product_name": "Malloc disk", 00:15:43.896 "block_size": 512, 00:15:43.896 "num_blocks": 65536, 00:15:43.896 "uuid": "46a7fc37-dcbc-4728-ba6f-970b8cadbb92", 00:15:43.896 "assigned_rate_limits": { 00:15:43.896 "rw_ios_per_sec": 0, 00:15:43.896 "rw_mbytes_per_sec": 0, 00:15:43.896 "r_mbytes_per_sec": 0, 00:15:43.896 "w_mbytes_per_sec": 0 00:15:43.896 }, 00:15:43.896 "claimed": true, 00:15:43.896 "claim_type": "exclusive_write", 00:15:43.896 "zoned": false, 00:15:43.896 "supported_io_types": { 00:15:43.896 "read": true, 00:15:43.896 "write": true, 00:15:43.896 "unmap": true, 00:15:43.896 "flush": true, 00:15:43.896 "reset": true, 00:15:43.896 
"nvme_admin": false, 00:15:43.896 "nvme_io": false, 00:15:43.896 "nvme_io_md": false, 00:15:43.896 "write_zeroes": true, 00:15:43.896 "zcopy": true, 00:15:43.896 "get_zone_info": false, 00:15:43.896 "zone_management": false, 00:15:43.896 "zone_append": false, 00:15:43.896 "compare": false, 00:15:43.896 "compare_and_write": false, 00:15:43.896 "abort": true, 00:15:43.896 "seek_hole": false, 00:15:43.896 "seek_data": false, 00:15:43.896 "copy": true, 00:15:43.896 "nvme_iov_md": false 00:15:43.896 }, 00:15:43.896 "memory_domains": [ 00:15:43.896 { 00:15:43.896 "dma_device_id": "system", 00:15:43.896 "dma_device_type": 1 00:15:43.896 }, 00:15:43.896 { 00:15:43.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.896 "dma_device_type": 2 00:15:43.896 } 00:15:43.896 ], 00:15:43.896 "driver_specific": {} 00:15:43.896 } 00:15:43.896 ] 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:43.896 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.897 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.897 
06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.897 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.897 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.897 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.897 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.897 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.897 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.897 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.897 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.897 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.897 "name": "Existed_Raid", 00:15:43.897 "uuid": "1ee8c276-2c8c-4a3c-bdf4-8e519988f06d", 00:15:43.897 "strip_size_kb": 64, 00:15:43.897 "state": "online", 00:15:43.897 "raid_level": "concat", 00:15:43.897 "superblock": false, 00:15:43.897 "num_base_bdevs": 4, 00:15:43.897 "num_base_bdevs_discovered": 4, 00:15:43.897 "num_base_bdevs_operational": 4, 00:15:43.897 "base_bdevs_list": [ 00:15:43.897 { 00:15:43.897 "name": "BaseBdev1", 00:15:43.897 "uuid": "a4577bb7-0940-4076-8da2-1410375d0a64", 00:15:43.897 "is_configured": true, 00:15:43.897 "data_offset": 0, 00:15:43.897 "data_size": 65536 00:15:43.897 }, 00:15:43.897 { 00:15:43.897 "name": "BaseBdev2", 00:15:43.897 "uuid": "fd467540-ca12-4e01-a577-791b2258c3ba", 00:15:43.897 "is_configured": true, 00:15:43.897 "data_offset": 0, 00:15:43.897 "data_size": 65536 00:15:43.897 }, 00:15:43.897 { 00:15:43.897 "name": "BaseBdev3", 
00:15:43.897 "uuid": "f6f11082-2abe-444b-83b2-393d7ffef893", 00:15:43.897 "is_configured": true, 00:15:43.897 "data_offset": 0, 00:15:43.897 "data_size": 65536 00:15:43.897 }, 00:15:43.897 { 00:15:43.897 "name": "BaseBdev4", 00:15:43.897 "uuid": "46a7fc37-dcbc-4728-ba6f-970b8cadbb92", 00:15:43.897 "is_configured": true, 00:15:43.897 "data_offset": 0, 00:15:43.897 "data_size": 65536 00:15:43.897 } 00:15:43.897 ] 00:15:43.897 }' 00:15:43.897 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.897 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.462 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:44.462 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:44.462 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:44.462 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:44.462 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:44.462 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:44.462 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:44.462 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.462 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:44.462 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.462 [2024-12-06 06:42:02.871987] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.462 06:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.462 
06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:44.462 "name": "Existed_Raid", 00:15:44.462 "aliases": [ 00:15:44.462 "1ee8c276-2c8c-4a3c-bdf4-8e519988f06d" 00:15:44.462 ], 00:15:44.462 "product_name": "Raid Volume", 00:15:44.462 "block_size": 512, 00:15:44.462 "num_blocks": 262144, 00:15:44.462 "uuid": "1ee8c276-2c8c-4a3c-bdf4-8e519988f06d", 00:15:44.462 "assigned_rate_limits": { 00:15:44.462 "rw_ios_per_sec": 0, 00:15:44.462 "rw_mbytes_per_sec": 0, 00:15:44.462 "r_mbytes_per_sec": 0, 00:15:44.462 "w_mbytes_per_sec": 0 00:15:44.462 }, 00:15:44.462 "claimed": false, 00:15:44.462 "zoned": false, 00:15:44.462 "supported_io_types": { 00:15:44.462 "read": true, 00:15:44.462 "write": true, 00:15:44.462 "unmap": true, 00:15:44.462 "flush": true, 00:15:44.462 "reset": true, 00:15:44.462 "nvme_admin": false, 00:15:44.462 "nvme_io": false, 00:15:44.462 "nvme_io_md": false, 00:15:44.462 "write_zeroes": true, 00:15:44.462 "zcopy": false, 00:15:44.462 "get_zone_info": false, 00:15:44.462 "zone_management": false, 00:15:44.462 "zone_append": false, 00:15:44.462 "compare": false, 00:15:44.462 "compare_and_write": false, 00:15:44.462 "abort": false, 00:15:44.462 "seek_hole": false, 00:15:44.462 "seek_data": false, 00:15:44.462 "copy": false, 00:15:44.462 "nvme_iov_md": false 00:15:44.462 }, 00:15:44.462 "memory_domains": [ 00:15:44.462 { 00:15:44.462 "dma_device_id": "system", 00:15:44.462 "dma_device_type": 1 00:15:44.462 }, 00:15:44.462 { 00:15:44.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.462 "dma_device_type": 2 00:15:44.462 }, 00:15:44.462 { 00:15:44.462 "dma_device_id": "system", 00:15:44.462 "dma_device_type": 1 00:15:44.462 }, 00:15:44.462 { 00:15:44.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.462 "dma_device_type": 2 00:15:44.462 }, 00:15:44.463 { 00:15:44.463 "dma_device_id": "system", 00:15:44.463 "dma_device_type": 1 00:15:44.463 }, 00:15:44.463 { 00:15:44.463 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:44.463 "dma_device_type": 2 00:15:44.463 }, 00:15:44.463 { 00:15:44.463 "dma_device_id": "system", 00:15:44.463 "dma_device_type": 1 00:15:44.463 }, 00:15:44.463 { 00:15:44.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.463 "dma_device_type": 2 00:15:44.463 } 00:15:44.463 ], 00:15:44.463 "driver_specific": { 00:15:44.463 "raid": { 00:15:44.463 "uuid": "1ee8c276-2c8c-4a3c-bdf4-8e519988f06d", 00:15:44.463 "strip_size_kb": 64, 00:15:44.463 "state": "online", 00:15:44.463 "raid_level": "concat", 00:15:44.463 "superblock": false, 00:15:44.463 "num_base_bdevs": 4, 00:15:44.463 "num_base_bdevs_discovered": 4, 00:15:44.463 "num_base_bdevs_operational": 4, 00:15:44.463 "base_bdevs_list": [ 00:15:44.463 { 00:15:44.463 "name": "BaseBdev1", 00:15:44.463 "uuid": "a4577bb7-0940-4076-8da2-1410375d0a64", 00:15:44.463 "is_configured": true, 00:15:44.463 "data_offset": 0, 00:15:44.463 "data_size": 65536 00:15:44.463 }, 00:15:44.463 { 00:15:44.463 "name": "BaseBdev2", 00:15:44.463 "uuid": "fd467540-ca12-4e01-a577-791b2258c3ba", 00:15:44.463 "is_configured": true, 00:15:44.463 "data_offset": 0, 00:15:44.463 "data_size": 65536 00:15:44.463 }, 00:15:44.463 { 00:15:44.463 "name": "BaseBdev3", 00:15:44.463 "uuid": "f6f11082-2abe-444b-83b2-393d7ffef893", 00:15:44.463 "is_configured": true, 00:15:44.463 "data_offset": 0, 00:15:44.463 "data_size": 65536 00:15:44.463 }, 00:15:44.463 { 00:15:44.463 "name": "BaseBdev4", 00:15:44.463 "uuid": "46a7fc37-dcbc-4728-ba6f-970b8cadbb92", 00:15:44.463 "is_configured": true, 00:15:44.463 "data_offset": 0, 00:15:44.463 "data_size": 65536 00:15:44.463 } 00:15:44.463 ] 00:15:44.463 } 00:15:44.463 } 00:15:44.463 }' 00:15:44.463 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:44.463 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:44.463 BaseBdev2 
00:15:44.463 BaseBdev3 00:15:44.463 BaseBdev4' 00:15:44.463 06:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.463 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:44.463 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.463 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:44.463 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.463 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.463 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.463 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.463 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.463 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.463 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.463 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.463 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:44.463 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.463 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.463 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.722 06:42:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.722 06:42:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.722 [2024-12-06 06:42:03.239753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:44.722 [2024-12-06 06:42:03.239793] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.722 [2024-12-06 06:42:03.239862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.722 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.723 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.723 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.723 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.723 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.723 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.723 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.982 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.982 "name": "Existed_Raid", 00:15:44.982 "uuid": "1ee8c276-2c8c-4a3c-bdf4-8e519988f06d", 00:15:44.982 "strip_size_kb": 64, 00:15:44.982 "state": "offline", 00:15:44.982 "raid_level": "concat", 00:15:44.982 "superblock": false, 00:15:44.982 "num_base_bdevs": 4, 00:15:44.982 "num_base_bdevs_discovered": 3, 00:15:44.982 "num_base_bdevs_operational": 3, 00:15:44.982 "base_bdevs_list": [ 00:15:44.982 { 00:15:44.982 "name": null, 00:15:44.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.982 "is_configured": false, 00:15:44.982 "data_offset": 0, 00:15:44.982 "data_size": 65536 00:15:44.982 }, 00:15:44.982 { 00:15:44.982 "name": "BaseBdev2", 00:15:44.982 "uuid": "fd467540-ca12-4e01-a577-791b2258c3ba", 00:15:44.982 "is_configured": 
true, 00:15:44.982 "data_offset": 0, 00:15:44.982 "data_size": 65536 00:15:44.982 }, 00:15:44.982 { 00:15:44.982 "name": "BaseBdev3", 00:15:44.982 "uuid": "f6f11082-2abe-444b-83b2-393d7ffef893", 00:15:44.982 "is_configured": true, 00:15:44.982 "data_offset": 0, 00:15:44.982 "data_size": 65536 00:15:44.982 }, 00:15:44.982 { 00:15:44.982 "name": "BaseBdev4", 00:15:44.982 "uuid": "46a7fc37-dcbc-4728-ba6f-970b8cadbb92", 00:15:44.982 "is_configured": true, 00:15:44.982 "data_offset": 0, 00:15:44.982 "data_size": 65536 00:15:44.982 } 00:15:44.982 ] 00:15:44.982 }' 00:15:44.982 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.982 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.381 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:45.381 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:45.381 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.381 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.381 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.381 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:45.382 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.382 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:45.382 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:45.382 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:45.382 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:45.382 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.382 [2024-12-06 06:42:03.897856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:45.382 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.382 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:45.382 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:45.382 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:45.382 06:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.382 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.382 06:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.644 [2024-12-06 06:42:04.043789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:45.644 06:42:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.644 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.644 [2024-12-06 06:42:04.193135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:45.645 [2024-12-06 06:42:04.193337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:45.645 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.645 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:45.645 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:45.645 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.645 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:45.645 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.645 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.906 BaseBdev2 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.906 [ 00:15:45.906 { 00:15:45.906 "name": "BaseBdev2", 00:15:45.906 "aliases": [ 00:15:45.906 "777030d4-5465-4054-9295-ff637f4a2863" 00:15:45.906 ], 00:15:45.906 "product_name": "Malloc disk", 00:15:45.906 "block_size": 512, 00:15:45.906 "num_blocks": 65536, 00:15:45.906 "uuid": "777030d4-5465-4054-9295-ff637f4a2863", 00:15:45.906 "assigned_rate_limits": { 00:15:45.906 "rw_ios_per_sec": 0, 00:15:45.906 "rw_mbytes_per_sec": 0, 00:15:45.906 "r_mbytes_per_sec": 0, 00:15:45.906 "w_mbytes_per_sec": 0 00:15:45.906 }, 00:15:45.906 "claimed": false, 00:15:45.906 "zoned": false, 00:15:45.906 "supported_io_types": { 00:15:45.906 "read": true, 00:15:45.906 "write": true, 00:15:45.906 "unmap": true, 00:15:45.906 "flush": true, 00:15:45.906 "reset": true, 00:15:45.906 "nvme_admin": false, 00:15:45.906 "nvme_io": false, 00:15:45.906 "nvme_io_md": false, 00:15:45.906 "write_zeroes": true, 00:15:45.906 "zcopy": true, 00:15:45.906 "get_zone_info": false, 00:15:45.906 "zone_management": false, 00:15:45.906 "zone_append": false, 00:15:45.906 "compare": false, 00:15:45.906 "compare_and_write": false, 00:15:45.906 "abort": true, 00:15:45.906 "seek_hole": false, 00:15:45.906 
"seek_data": false, 00:15:45.906 "copy": true, 00:15:45.906 "nvme_iov_md": false 00:15:45.906 }, 00:15:45.906 "memory_domains": [ 00:15:45.906 { 00:15:45.906 "dma_device_id": "system", 00:15:45.906 "dma_device_type": 1 00:15:45.906 }, 00:15:45.906 { 00:15:45.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.906 "dma_device_type": 2 00:15:45.906 } 00:15:45.906 ], 00:15:45.906 "driver_specific": {} 00:15:45.906 } 00:15:45.906 ] 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.906 BaseBdev3 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:45.906 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.907 [ 00:15:45.907 { 00:15:45.907 "name": "BaseBdev3", 00:15:45.907 "aliases": [ 00:15:45.907 "b85873dd-62cd-4b49-b1f4-3e63c6a72556" 00:15:45.907 ], 00:15:45.907 "product_name": "Malloc disk", 00:15:45.907 "block_size": 512, 00:15:45.907 "num_blocks": 65536, 00:15:45.907 "uuid": "b85873dd-62cd-4b49-b1f4-3e63c6a72556", 00:15:45.907 "assigned_rate_limits": { 00:15:45.907 "rw_ios_per_sec": 0, 00:15:45.907 "rw_mbytes_per_sec": 0, 00:15:45.907 "r_mbytes_per_sec": 0, 00:15:45.907 "w_mbytes_per_sec": 0 00:15:45.907 }, 00:15:45.907 "claimed": false, 00:15:45.907 "zoned": false, 00:15:45.907 "supported_io_types": { 00:15:45.907 "read": true, 00:15:45.907 "write": true, 00:15:45.907 "unmap": true, 00:15:45.907 "flush": true, 00:15:45.907 "reset": true, 00:15:45.907 "nvme_admin": false, 00:15:45.907 "nvme_io": false, 00:15:45.907 "nvme_io_md": false, 00:15:45.907 "write_zeroes": true, 00:15:45.907 "zcopy": true, 00:15:45.907 "get_zone_info": false, 00:15:45.907 "zone_management": false, 00:15:45.907 "zone_append": false, 00:15:45.907 "compare": false, 00:15:45.907 "compare_and_write": false, 00:15:45.907 "abort": true, 00:15:45.907 "seek_hole": false, 00:15:45.907 "seek_data": false, 
00:15:45.907 "copy": true, 00:15:45.907 "nvme_iov_md": false 00:15:45.907 }, 00:15:45.907 "memory_domains": [ 00:15:45.907 { 00:15:45.907 "dma_device_id": "system", 00:15:45.907 "dma_device_type": 1 00:15:45.907 }, 00:15:45.907 { 00:15:45.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.907 "dma_device_type": 2 00:15:45.907 } 00:15:45.907 ], 00:15:45.907 "driver_specific": {} 00:15:45.907 } 00:15:45.907 ] 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.907 BaseBdev4 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:45.907 
06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.907 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.907 [ 00:15:45.907 { 00:15:45.907 "name": "BaseBdev4", 00:15:45.907 "aliases": [ 00:15:45.907 "6be85416-493d-40aa-a201-44f071493e39" 00:15:45.907 ], 00:15:45.907 "product_name": "Malloc disk", 00:15:45.907 "block_size": 512, 00:15:45.907 "num_blocks": 65536, 00:15:45.907 "uuid": "6be85416-493d-40aa-a201-44f071493e39", 00:15:45.907 "assigned_rate_limits": { 00:15:45.907 "rw_ios_per_sec": 0, 00:15:45.907 "rw_mbytes_per_sec": 0, 00:15:45.907 "r_mbytes_per_sec": 0, 00:15:45.907 "w_mbytes_per_sec": 0 00:15:45.907 }, 00:15:45.907 "claimed": false, 00:15:45.907 "zoned": false, 00:15:45.907 "supported_io_types": { 00:15:45.907 "read": true, 00:15:45.907 "write": true, 00:15:45.907 "unmap": true, 00:15:45.907 "flush": true, 00:15:45.907 "reset": true, 00:15:45.907 "nvme_admin": false, 00:15:45.907 "nvme_io": false, 00:15:45.907 "nvme_io_md": false, 00:15:45.907 "write_zeroes": true, 00:15:45.907 "zcopy": true, 00:15:45.907 "get_zone_info": false, 00:15:45.907 "zone_management": false, 00:15:45.907 "zone_append": false, 00:15:45.907 "compare": false, 00:15:45.907 "compare_and_write": false, 00:15:45.907 "abort": true, 00:15:45.907 "seek_hole": false, 00:15:45.907 "seek_data": false, 00:15:45.907 
"copy": true, 00:15:45.907 "nvme_iov_md": false 00:15:46.165 }, 00:15:46.165 "memory_domains": [ 00:15:46.165 { 00:15:46.165 "dma_device_id": "system", 00:15:46.165 "dma_device_type": 1 00:15:46.165 }, 00:15:46.165 { 00:15:46.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.165 "dma_device_type": 2 00:15:46.165 } 00:15:46.165 ], 00:15:46.165 "driver_specific": {} 00:15:46.165 } 00:15:46.165 ] 00:15:46.165 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.165 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:46.165 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:46.165 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:46.165 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:46.165 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.165 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.165 [2024-12-06 06:42:04.560255] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:46.165 [2024-12-06 06:42:04.560310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:46.165 [2024-12-06 06:42:04.560341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:46.165 [2024-12-06 06:42:04.562735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:46.165 [2024-12-06 06:42:04.562939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:46.165 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.165 06:42:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:46.165 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.165 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.165 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:46.165 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.165 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.165 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.165 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.165 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.165 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.166 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.166 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.166 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.166 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.166 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.166 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.166 "name": "Existed_Raid", 00:15:46.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.166 "strip_size_kb": 64, 00:15:46.166 "state": "configuring", 00:15:46.166 
"raid_level": "concat", 00:15:46.166 "superblock": false, 00:15:46.166 "num_base_bdevs": 4, 00:15:46.166 "num_base_bdevs_discovered": 3, 00:15:46.166 "num_base_bdevs_operational": 4, 00:15:46.166 "base_bdevs_list": [ 00:15:46.166 { 00:15:46.166 "name": "BaseBdev1", 00:15:46.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.166 "is_configured": false, 00:15:46.166 "data_offset": 0, 00:15:46.166 "data_size": 0 00:15:46.166 }, 00:15:46.166 { 00:15:46.166 "name": "BaseBdev2", 00:15:46.166 "uuid": "777030d4-5465-4054-9295-ff637f4a2863", 00:15:46.166 "is_configured": true, 00:15:46.166 "data_offset": 0, 00:15:46.166 "data_size": 65536 00:15:46.166 }, 00:15:46.166 { 00:15:46.166 "name": "BaseBdev3", 00:15:46.166 "uuid": "b85873dd-62cd-4b49-b1f4-3e63c6a72556", 00:15:46.166 "is_configured": true, 00:15:46.166 "data_offset": 0, 00:15:46.166 "data_size": 65536 00:15:46.166 }, 00:15:46.166 { 00:15:46.166 "name": "BaseBdev4", 00:15:46.166 "uuid": "6be85416-493d-40aa-a201-44f071493e39", 00:15:46.166 "is_configured": true, 00:15:46.166 "data_offset": 0, 00:15:46.166 "data_size": 65536 00:15:46.166 } 00:15:46.166 ] 00:15:46.166 }' 00:15:46.166 06:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.166 06:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.732 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:46.732 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.732 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.732 [2024-12-06 06:42:05.096418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:46.732 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.733 "name": "Existed_Raid", 00:15:46.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.733 "strip_size_kb": 64, 00:15:46.733 "state": "configuring", 00:15:46.733 "raid_level": "concat", 00:15:46.733 "superblock": false, 
00:15:46.733 "num_base_bdevs": 4, 00:15:46.733 "num_base_bdevs_discovered": 2, 00:15:46.733 "num_base_bdevs_operational": 4, 00:15:46.733 "base_bdevs_list": [ 00:15:46.733 { 00:15:46.733 "name": "BaseBdev1", 00:15:46.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.733 "is_configured": false, 00:15:46.733 "data_offset": 0, 00:15:46.733 "data_size": 0 00:15:46.733 }, 00:15:46.733 { 00:15:46.733 "name": null, 00:15:46.733 "uuid": "777030d4-5465-4054-9295-ff637f4a2863", 00:15:46.733 "is_configured": false, 00:15:46.733 "data_offset": 0, 00:15:46.733 "data_size": 65536 00:15:46.733 }, 00:15:46.733 { 00:15:46.733 "name": "BaseBdev3", 00:15:46.733 "uuid": "b85873dd-62cd-4b49-b1f4-3e63c6a72556", 00:15:46.733 "is_configured": true, 00:15:46.733 "data_offset": 0, 00:15:46.733 "data_size": 65536 00:15:46.733 }, 00:15:46.733 { 00:15:46.733 "name": "BaseBdev4", 00:15:46.733 "uuid": "6be85416-493d-40aa-a201-44f071493e39", 00:15:46.733 "is_configured": true, 00:15:46.733 "data_offset": 0, 00:15:46.733 "data_size": 65536 00:15:46.733 } 00:15:46.733 ] 00:15:46.733 }' 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.733 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.991 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.991 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:46.991 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.991 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.991 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:47.250 06:42:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.250 [2024-12-06 06:42:05.686774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.250 BaseBdev1 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:47.250 [ 00:15:47.250 { 00:15:47.250 "name": "BaseBdev1", 00:15:47.250 "aliases": [ 00:15:47.250 "8cf1ffd0-2f79-45ac-9830-76e82f32c344" 00:15:47.250 ], 00:15:47.250 "product_name": "Malloc disk", 00:15:47.250 "block_size": 512, 00:15:47.250 "num_blocks": 65536, 00:15:47.250 "uuid": "8cf1ffd0-2f79-45ac-9830-76e82f32c344", 00:15:47.250 "assigned_rate_limits": { 00:15:47.250 "rw_ios_per_sec": 0, 00:15:47.250 "rw_mbytes_per_sec": 0, 00:15:47.250 "r_mbytes_per_sec": 0, 00:15:47.250 "w_mbytes_per_sec": 0 00:15:47.250 }, 00:15:47.250 "claimed": true, 00:15:47.250 "claim_type": "exclusive_write", 00:15:47.250 "zoned": false, 00:15:47.250 "supported_io_types": { 00:15:47.250 "read": true, 00:15:47.250 "write": true, 00:15:47.250 "unmap": true, 00:15:47.250 "flush": true, 00:15:47.250 "reset": true, 00:15:47.250 "nvme_admin": false, 00:15:47.250 "nvme_io": false, 00:15:47.250 "nvme_io_md": false, 00:15:47.250 "write_zeroes": true, 00:15:47.250 "zcopy": true, 00:15:47.250 "get_zone_info": false, 00:15:47.250 "zone_management": false, 00:15:47.250 "zone_append": false, 00:15:47.250 "compare": false, 00:15:47.250 "compare_and_write": false, 00:15:47.250 "abort": true, 00:15:47.250 "seek_hole": false, 00:15:47.250 "seek_data": false, 00:15:47.250 "copy": true, 00:15:47.250 "nvme_iov_md": false 00:15:47.250 }, 00:15:47.250 "memory_domains": [ 00:15:47.250 { 00:15:47.250 "dma_device_id": "system", 00:15:47.250 "dma_device_type": 1 00:15:47.250 }, 00:15:47.250 { 00:15:47.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.250 "dma_device_type": 2 00:15:47.250 } 00:15:47.250 ], 00:15:47.250 "driver_specific": {} 00:15:47.250 } 00:15:47.250 ] 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.250 "name": "Existed_Raid", 00:15:47.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.250 "strip_size_kb": 64, 00:15:47.250 "state": "configuring", 00:15:47.250 "raid_level": "concat", 00:15:47.250 "superblock": false, 
00:15:47.250 "num_base_bdevs": 4, 00:15:47.250 "num_base_bdevs_discovered": 3, 00:15:47.250 "num_base_bdevs_operational": 4, 00:15:47.250 "base_bdevs_list": [ 00:15:47.250 { 00:15:47.250 "name": "BaseBdev1", 00:15:47.250 "uuid": "8cf1ffd0-2f79-45ac-9830-76e82f32c344", 00:15:47.250 "is_configured": true, 00:15:47.250 "data_offset": 0, 00:15:47.250 "data_size": 65536 00:15:47.250 }, 00:15:47.250 { 00:15:47.250 "name": null, 00:15:47.250 "uuid": "777030d4-5465-4054-9295-ff637f4a2863", 00:15:47.250 "is_configured": false, 00:15:47.250 "data_offset": 0, 00:15:47.250 "data_size": 65536 00:15:47.250 }, 00:15:47.250 { 00:15:47.250 "name": "BaseBdev3", 00:15:47.250 "uuid": "b85873dd-62cd-4b49-b1f4-3e63c6a72556", 00:15:47.250 "is_configured": true, 00:15:47.250 "data_offset": 0, 00:15:47.250 "data_size": 65536 00:15:47.250 }, 00:15:47.250 { 00:15:47.250 "name": "BaseBdev4", 00:15:47.250 "uuid": "6be85416-493d-40aa-a201-44f071493e39", 00:15:47.250 "is_configured": true, 00:15:47.250 "data_offset": 0, 00:15:47.250 "data_size": 65536 00:15:47.250 } 00:15:47.250 ] 00:15:47.250 }' 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.250 06:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:47.815 06:42:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.815 [2024-12-06 06:42:06.383041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.815 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.815 "name": "Existed_Raid", 00:15:47.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.815 "strip_size_kb": 64, 00:15:47.815 "state": "configuring", 00:15:47.815 "raid_level": "concat", 00:15:47.815 "superblock": false, 00:15:47.815 "num_base_bdevs": 4, 00:15:47.815 "num_base_bdevs_discovered": 2, 00:15:47.815 "num_base_bdevs_operational": 4, 00:15:47.816 "base_bdevs_list": [ 00:15:47.816 { 00:15:47.816 "name": "BaseBdev1", 00:15:47.816 "uuid": "8cf1ffd0-2f79-45ac-9830-76e82f32c344", 00:15:47.816 "is_configured": true, 00:15:47.816 "data_offset": 0, 00:15:47.816 "data_size": 65536 00:15:47.816 }, 00:15:47.816 { 00:15:47.816 "name": null, 00:15:47.816 "uuid": "777030d4-5465-4054-9295-ff637f4a2863", 00:15:47.816 "is_configured": false, 00:15:47.816 "data_offset": 0, 00:15:47.816 "data_size": 65536 00:15:47.816 }, 00:15:47.816 { 00:15:47.816 "name": null, 00:15:47.816 "uuid": "b85873dd-62cd-4b49-b1f4-3e63c6a72556", 00:15:47.816 "is_configured": false, 00:15:47.816 "data_offset": 0, 00:15:47.816 "data_size": 65536 00:15:47.816 }, 00:15:47.816 { 00:15:47.816 "name": "BaseBdev4", 00:15:47.816 "uuid": "6be85416-493d-40aa-a201-44f071493e39", 00:15:47.816 "is_configured": true, 00:15:47.816 "data_offset": 0, 00:15:47.816 "data_size": 65536 00:15:47.816 } 00:15:47.816 ] 00:15:47.816 }' 00:15:47.816 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.816 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.380 [2024-12-06 06:42:06.943168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.380 06:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.380 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.380 "name": "Existed_Raid", 00:15:48.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.380 "strip_size_kb": 64, 00:15:48.380 "state": "configuring", 00:15:48.380 "raid_level": "concat", 00:15:48.380 "superblock": false, 00:15:48.380 "num_base_bdevs": 4, 00:15:48.380 "num_base_bdevs_discovered": 3, 00:15:48.380 "num_base_bdevs_operational": 4, 00:15:48.380 "base_bdevs_list": [ 00:15:48.380 { 00:15:48.380 "name": "BaseBdev1", 00:15:48.380 "uuid": "8cf1ffd0-2f79-45ac-9830-76e82f32c344", 00:15:48.380 "is_configured": true, 00:15:48.380 "data_offset": 0, 00:15:48.380 "data_size": 65536 00:15:48.380 }, 00:15:48.380 { 00:15:48.380 "name": null, 00:15:48.380 "uuid": "777030d4-5465-4054-9295-ff637f4a2863", 00:15:48.380 "is_configured": false, 00:15:48.380 "data_offset": 0, 00:15:48.380 "data_size": 65536 00:15:48.380 }, 00:15:48.380 { 00:15:48.380 "name": "BaseBdev3", 00:15:48.380 "uuid": "b85873dd-62cd-4b49-b1f4-3e63c6a72556", 00:15:48.380 
"is_configured": true, 00:15:48.380 "data_offset": 0, 00:15:48.380 "data_size": 65536 00:15:48.380 }, 00:15:48.380 { 00:15:48.380 "name": "BaseBdev4", 00:15:48.380 "uuid": "6be85416-493d-40aa-a201-44f071493e39", 00:15:48.380 "is_configured": true, 00:15:48.380 "data_offset": 0, 00:15:48.380 "data_size": 65536 00:15:48.380 } 00:15:48.380 ] 00:15:48.380 }' 00:15:48.380 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.380 06:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.944 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:48.944 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.944 06:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.944 06:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.944 06:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.944 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:48.944 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:48.944 06:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.944 06:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.944 [2024-12-06 06:42:07.547391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.202 "name": "Existed_Raid", 00:15:49.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.202 "strip_size_kb": 64, 00:15:49.202 "state": "configuring", 00:15:49.202 "raid_level": "concat", 00:15:49.202 "superblock": false, 00:15:49.202 "num_base_bdevs": 4, 00:15:49.202 "num_base_bdevs_discovered": 2, 00:15:49.202 "num_base_bdevs_operational": 4, 
00:15:49.202 "base_bdevs_list": [ 00:15:49.202 { 00:15:49.202 "name": null, 00:15:49.202 "uuid": "8cf1ffd0-2f79-45ac-9830-76e82f32c344", 00:15:49.202 "is_configured": false, 00:15:49.202 "data_offset": 0, 00:15:49.202 "data_size": 65536 00:15:49.202 }, 00:15:49.202 { 00:15:49.202 "name": null, 00:15:49.202 "uuid": "777030d4-5465-4054-9295-ff637f4a2863", 00:15:49.202 "is_configured": false, 00:15:49.202 "data_offset": 0, 00:15:49.202 "data_size": 65536 00:15:49.202 }, 00:15:49.202 { 00:15:49.202 "name": "BaseBdev3", 00:15:49.202 "uuid": "b85873dd-62cd-4b49-b1f4-3e63c6a72556", 00:15:49.202 "is_configured": true, 00:15:49.202 "data_offset": 0, 00:15:49.202 "data_size": 65536 00:15:49.202 }, 00:15:49.202 { 00:15:49.202 "name": "BaseBdev4", 00:15:49.202 "uuid": "6be85416-493d-40aa-a201-44f071493e39", 00:15:49.202 "is_configured": true, 00:15:49.202 "data_offset": 0, 00:15:49.202 "data_size": 65536 00:15:49.202 } 00:15:49.202 ] 00:15:49.202 }' 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.202 06:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:49.764 06:42:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.764 [2024-12-06 06:42:08.212361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.764 06:42:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.764 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.764 "name": "Existed_Raid", 00:15:49.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.764 "strip_size_kb": 64, 00:15:49.764 "state": "configuring", 00:15:49.764 "raid_level": "concat", 00:15:49.764 "superblock": false, 00:15:49.764 "num_base_bdevs": 4, 00:15:49.764 "num_base_bdevs_discovered": 3, 00:15:49.764 "num_base_bdevs_operational": 4, 00:15:49.764 "base_bdevs_list": [ 00:15:49.764 { 00:15:49.764 "name": null, 00:15:49.764 "uuid": "8cf1ffd0-2f79-45ac-9830-76e82f32c344", 00:15:49.764 "is_configured": false, 00:15:49.764 "data_offset": 0, 00:15:49.764 "data_size": 65536 00:15:49.764 }, 00:15:49.764 { 00:15:49.764 "name": "BaseBdev2", 00:15:49.764 "uuid": "777030d4-5465-4054-9295-ff637f4a2863", 00:15:49.765 "is_configured": true, 00:15:49.765 "data_offset": 0, 00:15:49.765 "data_size": 65536 00:15:49.765 }, 00:15:49.765 { 00:15:49.765 "name": "BaseBdev3", 00:15:49.765 "uuid": "b85873dd-62cd-4b49-b1f4-3e63c6a72556", 00:15:49.765 "is_configured": true, 00:15:49.765 "data_offset": 0, 00:15:49.765 "data_size": 65536 00:15:49.765 }, 00:15:49.765 { 00:15:49.765 "name": "BaseBdev4", 00:15:49.765 "uuid": "6be85416-493d-40aa-a201-44f071493e39", 00:15:49.765 "is_configured": true, 00:15:49.765 "data_offset": 0, 00:15:49.765 "data_size": 65536 00:15:49.765 } 00:15:49.765 ] 00:15:49.765 }' 00:15:49.765 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.765 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:50.329 06:42:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8cf1ffd0-2f79-45ac-9830-76e82f32c344 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.329 [2024-12-06 06:42:08.842517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:50.329 [2024-12-06 06:42:08.842601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:50.329 [2024-12-06 06:42:08.842614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:50.329 [2024-12-06 06:42:08.842951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:50.329 
[2024-12-06 06:42:08.843131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:50.329 [2024-12-06 06:42:08.843151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:50.329 [2024-12-06 06:42:08.843448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.329 NewBaseBdev 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.329 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:50.329 [ 00:15:50.329 { 00:15:50.329 "name": "NewBaseBdev", 00:15:50.329 "aliases": [ 00:15:50.329 "8cf1ffd0-2f79-45ac-9830-76e82f32c344" 00:15:50.329 ], 00:15:50.329 "product_name": "Malloc disk", 00:15:50.329 "block_size": 512, 00:15:50.329 "num_blocks": 65536, 00:15:50.329 "uuid": "8cf1ffd0-2f79-45ac-9830-76e82f32c344", 00:15:50.329 "assigned_rate_limits": { 00:15:50.329 "rw_ios_per_sec": 0, 00:15:50.329 "rw_mbytes_per_sec": 0, 00:15:50.329 "r_mbytes_per_sec": 0, 00:15:50.329 "w_mbytes_per_sec": 0 00:15:50.329 }, 00:15:50.329 "claimed": true, 00:15:50.329 "claim_type": "exclusive_write", 00:15:50.329 "zoned": false, 00:15:50.329 "supported_io_types": { 00:15:50.329 "read": true, 00:15:50.329 "write": true, 00:15:50.329 "unmap": true, 00:15:50.329 "flush": true, 00:15:50.329 "reset": true, 00:15:50.329 "nvme_admin": false, 00:15:50.329 "nvme_io": false, 00:15:50.329 "nvme_io_md": false, 00:15:50.329 "write_zeroes": true, 00:15:50.329 "zcopy": true, 00:15:50.329 "get_zone_info": false, 00:15:50.329 "zone_management": false, 00:15:50.329 "zone_append": false, 00:15:50.329 "compare": false, 00:15:50.329 "compare_and_write": false, 00:15:50.329 "abort": true, 00:15:50.329 "seek_hole": false, 00:15:50.329 "seek_data": false, 00:15:50.329 "copy": true, 00:15:50.329 "nvme_iov_md": false 00:15:50.329 }, 00:15:50.329 "memory_domains": [ 00:15:50.329 { 00:15:50.329 "dma_device_id": "system", 00:15:50.329 "dma_device_type": 1 00:15:50.329 }, 00:15:50.329 { 00:15:50.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.330 "dma_device_type": 2 00:15:50.330 } 00:15:50.330 ], 00:15:50.330 "driver_specific": {} 00:15:50.330 } 00:15:50.330 ] 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online concat 64 4 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.330 "name": "Existed_Raid", 00:15:50.330 "uuid": "f8a6d566-8739-4737-a042-d85f65a76af4", 00:15:50.330 "strip_size_kb": 64, 00:15:50.330 "state": "online", 00:15:50.330 "raid_level": "concat", 00:15:50.330 "superblock": false, 00:15:50.330 "num_base_bdevs": 4, 00:15:50.330 
"num_base_bdevs_discovered": 4, 00:15:50.330 "num_base_bdevs_operational": 4, 00:15:50.330 "base_bdevs_list": [ 00:15:50.330 { 00:15:50.330 "name": "NewBaseBdev", 00:15:50.330 "uuid": "8cf1ffd0-2f79-45ac-9830-76e82f32c344", 00:15:50.330 "is_configured": true, 00:15:50.330 "data_offset": 0, 00:15:50.330 "data_size": 65536 00:15:50.330 }, 00:15:50.330 { 00:15:50.330 "name": "BaseBdev2", 00:15:50.330 "uuid": "777030d4-5465-4054-9295-ff637f4a2863", 00:15:50.330 "is_configured": true, 00:15:50.330 "data_offset": 0, 00:15:50.330 "data_size": 65536 00:15:50.330 }, 00:15:50.330 { 00:15:50.330 "name": "BaseBdev3", 00:15:50.330 "uuid": "b85873dd-62cd-4b49-b1f4-3e63c6a72556", 00:15:50.330 "is_configured": true, 00:15:50.330 "data_offset": 0, 00:15:50.330 "data_size": 65536 00:15:50.330 }, 00:15:50.330 { 00:15:50.330 "name": "BaseBdev4", 00:15:50.330 "uuid": "6be85416-493d-40aa-a201-44f071493e39", 00:15:50.330 "is_configured": true, 00:15:50.330 "data_offset": 0, 00:15:50.330 "data_size": 65536 00:15:50.330 } 00:15:50.330 ] 00:15:50.330 }' 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.330 06:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.976 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:50.976 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:50.976 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:50.976 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:50.976 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:50.976 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:50.976 06:42:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:50.976 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.976 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.976 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:50.976 [2024-12-06 06:42:09.359182] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.976 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.976 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:50.976 "name": "Existed_Raid", 00:15:50.976 "aliases": [ 00:15:50.976 "f8a6d566-8739-4737-a042-d85f65a76af4" 00:15:50.976 ], 00:15:50.976 "product_name": "Raid Volume", 00:15:50.976 "block_size": 512, 00:15:50.976 "num_blocks": 262144, 00:15:50.976 "uuid": "f8a6d566-8739-4737-a042-d85f65a76af4", 00:15:50.976 "assigned_rate_limits": { 00:15:50.976 "rw_ios_per_sec": 0, 00:15:50.976 "rw_mbytes_per_sec": 0, 00:15:50.976 "r_mbytes_per_sec": 0, 00:15:50.976 "w_mbytes_per_sec": 0 00:15:50.976 }, 00:15:50.976 "claimed": false, 00:15:50.976 "zoned": false, 00:15:50.976 "supported_io_types": { 00:15:50.976 "read": true, 00:15:50.976 "write": true, 00:15:50.976 "unmap": true, 00:15:50.976 "flush": true, 00:15:50.976 "reset": true, 00:15:50.976 "nvme_admin": false, 00:15:50.976 "nvme_io": false, 00:15:50.976 "nvme_io_md": false, 00:15:50.976 "write_zeroes": true, 00:15:50.976 "zcopy": false, 00:15:50.976 "get_zone_info": false, 00:15:50.976 "zone_management": false, 00:15:50.976 "zone_append": false, 00:15:50.976 "compare": false, 00:15:50.976 "compare_and_write": false, 00:15:50.976 "abort": false, 00:15:50.976 "seek_hole": false, 00:15:50.976 "seek_data": false, 00:15:50.976 "copy": false, 00:15:50.976 "nvme_iov_md": false 00:15:50.976 }, 00:15:50.976 "memory_domains": [ 
00:15:50.976 { 00:15:50.976 "dma_device_id": "system", 00:15:50.976 "dma_device_type": 1 00:15:50.976 }, 00:15:50.976 { 00:15:50.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.976 "dma_device_type": 2 00:15:50.976 }, 00:15:50.976 { 00:15:50.976 "dma_device_id": "system", 00:15:50.976 "dma_device_type": 1 00:15:50.976 }, 00:15:50.976 { 00:15:50.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.976 "dma_device_type": 2 00:15:50.976 }, 00:15:50.976 { 00:15:50.976 "dma_device_id": "system", 00:15:50.976 "dma_device_type": 1 00:15:50.976 }, 00:15:50.976 { 00:15:50.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.976 "dma_device_type": 2 00:15:50.976 }, 00:15:50.976 { 00:15:50.976 "dma_device_id": "system", 00:15:50.976 "dma_device_type": 1 00:15:50.976 }, 00:15:50.976 { 00:15:50.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.976 "dma_device_type": 2 00:15:50.976 } 00:15:50.976 ], 00:15:50.976 "driver_specific": { 00:15:50.976 "raid": { 00:15:50.976 "uuid": "f8a6d566-8739-4737-a042-d85f65a76af4", 00:15:50.976 "strip_size_kb": 64, 00:15:50.976 "state": "online", 00:15:50.976 "raid_level": "concat", 00:15:50.976 "superblock": false, 00:15:50.976 "num_base_bdevs": 4, 00:15:50.976 "num_base_bdevs_discovered": 4, 00:15:50.976 "num_base_bdevs_operational": 4, 00:15:50.976 "base_bdevs_list": [ 00:15:50.976 { 00:15:50.976 "name": "NewBaseBdev", 00:15:50.977 "uuid": "8cf1ffd0-2f79-45ac-9830-76e82f32c344", 00:15:50.977 "is_configured": true, 00:15:50.977 "data_offset": 0, 00:15:50.977 "data_size": 65536 00:15:50.977 }, 00:15:50.977 { 00:15:50.977 "name": "BaseBdev2", 00:15:50.977 "uuid": "777030d4-5465-4054-9295-ff637f4a2863", 00:15:50.977 "is_configured": true, 00:15:50.977 "data_offset": 0, 00:15:50.977 "data_size": 65536 00:15:50.977 }, 00:15:50.977 { 00:15:50.977 "name": "BaseBdev3", 00:15:50.977 "uuid": "b85873dd-62cd-4b49-b1f4-3e63c6a72556", 00:15:50.977 "is_configured": true, 00:15:50.977 "data_offset": 0, 00:15:50.977 "data_size": 65536 
00:15:50.977 }, 00:15:50.977 { 00:15:50.977 "name": "BaseBdev4", 00:15:50.977 "uuid": "6be85416-493d-40aa-a201-44f071493e39", 00:15:50.977 "is_configured": true, 00:15:50.977 "data_offset": 0, 00:15:50.977 "data_size": 65536 00:15:50.977 } 00:15:50.977 ] 00:15:50.977 } 00:15:50.977 } 00:15:50.977 }' 00:15:50.977 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:50.977 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:50.977 BaseBdev2 00:15:50.977 BaseBdev3 00:15:50.977 BaseBdev4' 00:15:50.977 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.977 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:50.977 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.977 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:50.977 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.977 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.977 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.977 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.977 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.977 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.977 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.977 
06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:50.977 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.977 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.977 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.977 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.234 [2024-12-06 06:42:09.742887] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:51.234 [2024-12-06 06:42:09.742925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.234 [2024-12-06 06:42:09.743020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.234 [2024-12-06 06:42:09.743116] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.234 [2024-12-06 06:42:09.743134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71574 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 71574 ']' 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71574 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71574 00:15:51.234 killing process with pid 71574 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71574' 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71574 00:15:51.234 [2024-12-06 06:42:09.783275] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:51.234 06:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71574 00:15:51.493 [2024-12-06 06:42:10.137728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:52.868 00:15:52.868 real 0m12.716s 00:15:52.868 user 0m21.043s 00:15:52.868 sys 0m1.787s 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.868 ************************************ 00:15:52.868 END TEST raid_state_function_test 00:15:52.868 ************************************ 00:15:52.868 06:42:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 4 true 00:15:52.868 06:42:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:52.868 06:42:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.868 06:42:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:52.868 ************************************ 00:15:52.868 START TEST raid_state_function_test_sb 00:15:52.868 ************************************ 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72256 00:15:52.868 Process raid pid: 72256 00:15:52.868 06:42:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72256' 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72256 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72256 ']' 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.868 06:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.868 [2024-12-06 06:42:11.373214] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:15:52.868 [2024-12-06 06:42:11.373644] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.127 [2024-12-06 06:42:11.561172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.127 [2024-12-06 06:42:11.689992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.385 [2024-12-06 06:42:11.901606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.385 [2024-12-06 06:42:11.901669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.078 [2024-12-06 06:42:12.326114] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.078 [2024-12-06 06:42:12.326182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:54.078 [2024-12-06 06:42:12.326200] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.078 [2024-12-06 06:42:12.326217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.078 [2024-12-06 06:42:12.326227] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:54.078 [2024-12-06 06:42:12.326241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:54.078 [2024-12-06 06:42:12.326251] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:54.078 [2024-12-06 06:42:12.326265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.078 06:42:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.078 "name": "Existed_Raid", 00:15:54.078 "uuid": "d5f70773-e112-42f7-b47e-ba8c8d68f6e4", 00:15:54.078 "strip_size_kb": 64, 00:15:54.078 "state": "configuring", 00:15:54.078 "raid_level": "concat", 00:15:54.078 "superblock": true, 00:15:54.078 "num_base_bdevs": 4, 00:15:54.078 "num_base_bdevs_discovered": 0, 00:15:54.078 "num_base_bdevs_operational": 4, 00:15:54.078 "base_bdevs_list": [ 00:15:54.078 { 00:15:54.078 "name": "BaseBdev1", 00:15:54.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.078 "is_configured": false, 00:15:54.078 "data_offset": 0, 00:15:54.078 "data_size": 0 00:15:54.078 }, 00:15:54.078 { 00:15:54.078 "name": "BaseBdev2", 00:15:54.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.078 "is_configured": false, 00:15:54.078 "data_offset": 0, 00:15:54.078 "data_size": 0 00:15:54.078 }, 00:15:54.078 { 00:15:54.078 "name": "BaseBdev3", 00:15:54.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.078 "is_configured": false, 00:15:54.078 "data_offset": 0, 00:15:54.078 "data_size": 0 00:15:54.078 }, 00:15:54.078 { 00:15:54.078 "name": "BaseBdev4", 00:15:54.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.078 "is_configured": false, 00:15:54.078 "data_offset": 0, 00:15:54.078 "data_size": 0 00:15:54.078 } 00:15:54.078 ] 00:15:54.078 }' 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.078 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.338 06:42:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:54.338 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.338 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.338 [2024-12-06 06:42:12.842263] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.338 [2024-12-06 06:42:12.842472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:54.338 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.338 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:54.338 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.338 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.338 [2024-12-06 06:42:12.854247] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.338 [2024-12-06 06:42:12.854448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:54.338 [2024-12-06 06:42:12.854641] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.338 [2024-12-06 06:42:12.854717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.338 [2024-12-06 06:42:12.854953] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:54.338 [2024-12-06 06:42:12.855024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:54.338 [2024-12-06 06:42:12.855204] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:15:54.338 [2024-12-06 06:42:12.855272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:54.338 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.338 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:54.338 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.338 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.338 [2024-12-06 06:42:12.907055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.338 BaseBdev1 00:15:54.338 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.338 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:54.338 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:54.338 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:54.338 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:54.338 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:54.338 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.339 [ 00:15:54.339 { 00:15:54.339 "name": "BaseBdev1", 00:15:54.339 "aliases": [ 00:15:54.339 "eba6c282-43bc-4871-a823-c656430836b4" 00:15:54.339 ], 00:15:54.339 "product_name": "Malloc disk", 00:15:54.339 "block_size": 512, 00:15:54.339 "num_blocks": 65536, 00:15:54.339 "uuid": "eba6c282-43bc-4871-a823-c656430836b4", 00:15:54.339 "assigned_rate_limits": { 00:15:54.339 "rw_ios_per_sec": 0, 00:15:54.339 "rw_mbytes_per_sec": 0, 00:15:54.339 "r_mbytes_per_sec": 0, 00:15:54.339 "w_mbytes_per_sec": 0 00:15:54.339 }, 00:15:54.339 "claimed": true, 00:15:54.339 "claim_type": "exclusive_write", 00:15:54.339 "zoned": false, 00:15:54.339 "supported_io_types": { 00:15:54.339 "read": true, 00:15:54.339 "write": true, 00:15:54.339 "unmap": true, 00:15:54.339 "flush": true, 00:15:54.339 "reset": true, 00:15:54.339 "nvme_admin": false, 00:15:54.339 "nvme_io": false, 00:15:54.339 "nvme_io_md": false, 00:15:54.339 "write_zeroes": true, 00:15:54.339 "zcopy": true, 00:15:54.339 "get_zone_info": false, 00:15:54.339 "zone_management": false, 00:15:54.339 "zone_append": false, 00:15:54.339 "compare": false, 00:15:54.339 "compare_and_write": false, 00:15:54.339 "abort": true, 00:15:54.339 "seek_hole": false, 00:15:54.339 "seek_data": false, 00:15:54.339 "copy": true, 00:15:54.339 "nvme_iov_md": false 00:15:54.339 }, 00:15:54.339 "memory_domains": [ 00:15:54.339 { 00:15:54.339 "dma_device_id": "system", 00:15:54.339 "dma_device_type": 1 00:15:54.339 }, 00:15:54.339 { 00:15:54.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.339 "dma_device_type": 2 00:15:54.339 } 
00:15:54.339 ], 00:15:54.339 "driver_specific": {} 00:15:54.339 } 00:15:54.339 ] 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.339 06:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.339 06:42:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.598 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.598 "name": "Existed_Raid", 00:15:54.598 "uuid": "4663d972-0e42-4679-85aa-885926ac14bb", 00:15:54.598 "strip_size_kb": 64, 00:15:54.598 "state": "configuring", 00:15:54.598 "raid_level": "concat", 00:15:54.598 "superblock": true, 00:15:54.598 "num_base_bdevs": 4, 00:15:54.598 "num_base_bdevs_discovered": 1, 00:15:54.598 "num_base_bdevs_operational": 4, 00:15:54.598 "base_bdevs_list": [ 00:15:54.598 { 00:15:54.598 "name": "BaseBdev1", 00:15:54.598 "uuid": "eba6c282-43bc-4871-a823-c656430836b4", 00:15:54.598 "is_configured": true, 00:15:54.598 "data_offset": 2048, 00:15:54.598 "data_size": 63488 00:15:54.598 }, 00:15:54.598 { 00:15:54.598 "name": "BaseBdev2", 00:15:54.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.598 "is_configured": false, 00:15:54.598 "data_offset": 0, 00:15:54.598 "data_size": 0 00:15:54.598 }, 00:15:54.598 { 00:15:54.598 "name": "BaseBdev3", 00:15:54.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.598 "is_configured": false, 00:15:54.598 "data_offset": 0, 00:15:54.598 "data_size": 0 00:15:54.598 }, 00:15:54.598 { 00:15:54.598 "name": "BaseBdev4", 00:15:54.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.598 "is_configured": false, 00:15:54.598 "data_offset": 0, 00:15:54.598 "data_size": 0 00:15:54.598 } 00:15:54.598 ] 00:15:54.598 }' 00:15:54.598 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.598 06:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.858 06:42:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.858 [2024-12-06 06:42:13.471243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.858 [2024-12-06 06:42:13.471471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.858 [2024-12-06 06:42:13.479307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.858 [2024-12-06 06:42:13.481916] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.858 [2024-12-06 06:42:13.482088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.858 [2024-12-06 06:42:13.482210] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:54.858 [2024-12-06 06:42:13.482369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:54.858 [2024-12-06 06:42:13.482513] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:54.858 [2024-12-06 06:42:13.482610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.858 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.117 06:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.117 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:55.117 "name": "Existed_Raid", 00:15:55.117 "uuid": "e43bc892-78b1-4426-b9ab-dc4cc3ed90f6", 00:15:55.117 "strip_size_kb": 64, 00:15:55.117 "state": "configuring", 00:15:55.117 "raid_level": "concat", 00:15:55.117 "superblock": true, 00:15:55.117 "num_base_bdevs": 4, 00:15:55.117 "num_base_bdevs_discovered": 1, 00:15:55.117 "num_base_bdevs_operational": 4, 00:15:55.117 "base_bdevs_list": [ 00:15:55.117 { 00:15:55.117 "name": "BaseBdev1", 00:15:55.117 "uuid": "eba6c282-43bc-4871-a823-c656430836b4", 00:15:55.117 "is_configured": true, 00:15:55.117 "data_offset": 2048, 00:15:55.117 "data_size": 63488 00:15:55.117 }, 00:15:55.117 { 00:15:55.117 "name": "BaseBdev2", 00:15:55.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.117 "is_configured": false, 00:15:55.117 "data_offset": 0, 00:15:55.117 "data_size": 0 00:15:55.117 }, 00:15:55.117 { 00:15:55.117 "name": "BaseBdev3", 00:15:55.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.117 "is_configured": false, 00:15:55.117 "data_offset": 0, 00:15:55.117 "data_size": 0 00:15:55.117 }, 00:15:55.117 { 00:15:55.117 "name": "BaseBdev4", 00:15:55.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.117 "is_configured": false, 00:15:55.117 "data_offset": 0, 00:15:55.117 "data_size": 0 00:15:55.117 } 00:15:55.117 ] 00:15:55.117 }' 00:15:55.117 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.117 06:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.376 06:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:55.376 06:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.376 06:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.376 [2024-12-06 06:42:14.010264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:15:55.376 BaseBdev2 00:15:55.376 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.376 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:55.376 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:55.376 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:55.376 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:55.376 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:55.376 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:55.376 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:55.376 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.376 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.634 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.634 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:55.634 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.635 [ 00:15:55.635 { 00:15:55.635 "name": "BaseBdev2", 00:15:55.635 "aliases": [ 00:15:55.635 "8abe1f0a-7224-47e8-9212-2b565a84115e" 00:15:55.635 ], 00:15:55.635 "product_name": "Malloc disk", 00:15:55.635 "block_size": 512, 00:15:55.635 "num_blocks": 65536, 00:15:55.635 "uuid": "8abe1f0a-7224-47e8-9212-2b565a84115e", 
00:15:55.635 "assigned_rate_limits": { 00:15:55.635 "rw_ios_per_sec": 0, 00:15:55.635 "rw_mbytes_per_sec": 0, 00:15:55.635 "r_mbytes_per_sec": 0, 00:15:55.635 "w_mbytes_per_sec": 0 00:15:55.635 }, 00:15:55.635 "claimed": true, 00:15:55.635 "claim_type": "exclusive_write", 00:15:55.635 "zoned": false, 00:15:55.635 "supported_io_types": { 00:15:55.635 "read": true, 00:15:55.635 "write": true, 00:15:55.635 "unmap": true, 00:15:55.635 "flush": true, 00:15:55.635 "reset": true, 00:15:55.635 "nvme_admin": false, 00:15:55.635 "nvme_io": false, 00:15:55.635 "nvme_io_md": false, 00:15:55.635 "write_zeroes": true, 00:15:55.635 "zcopy": true, 00:15:55.635 "get_zone_info": false, 00:15:55.635 "zone_management": false, 00:15:55.635 "zone_append": false, 00:15:55.635 "compare": false, 00:15:55.635 "compare_and_write": false, 00:15:55.635 "abort": true, 00:15:55.635 "seek_hole": false, 00:15:55.635 "seek_data": false, 00:15:55.635 "copy": true, 00:15:55.635 "nvme_iov_md": false 00:15:55.635 }, 00:15:55.635 "memory_domains": [ 00:15:55.635 { 00:15:55.635 "dma_device_id": "system", 00:15:55.635 "dma_device_type": 1 00:15:55.635 }, 00:15:55.635 { 00:15:55.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.635 "dma_device_type": 2 00:15:55.635 } 00:15:55.635 ], 00:15:55.635 "driver_specific": {} 00:15:55.635 } 00:15:55.635 ] 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.635 "name": "Existed_Raid", 00:15:55.635 "uuid": "e43bc892-78b1-4426-b9ab-dc4cc3ed90f6", 00:15:55.635 "strip_size_kb": 64, 00:15:55.635 "state": "configuring", 00:15:55.635 "raid_level": "concat", 00:15:55.635 "superblock": true, 00:15:55.635 "num_base_bdevs": 4, 00:15:55.635 "num_base_bdevs_discovered": 2, 00:15:55.635 
"num_base_bdevs_operational": 4, 00:15:55.635 "base_bdevs_list": [ 00:15:55.635 { 00:15:55.635 "name": "BaseBdev1", 00:15:55.635 "uuid": "eba6c282-43bc-4871-a823-c656430836b4", 00:15:55.635 "is_configured": true, 00:15:55.635 "data_offset": 2048, 00:15:55.635 "data_size": 63488 00:15:55.635 }, 00:15:55.635 { 00:15:55.635 "name": "BaseBdev2", 00:15:55.635 "uuid": "8abe1f0a-7224-47e8-9212-2b565a84115e", 00:15:55.635 "is_configured": true, 00:15:55.635 "data_offset": 2048, 00:15:55.635 "data_size": 63488 00:15:55.635 }, 00:15:55.635 { 00:15:55.635 "name": "BaseBdev3", 00:15:55.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.635 "is_configured": false, 00:15:55.635 "data_offset": 0, 00:15:55.635 "data_size": 0 00:15:55.635 }, 00:15:55.635 { 00:15:55.635 "name": "BaseBdev4", 00:15:55.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.635 "is_configured": false, 00:15:55.635 "data_offset": 0, 00:15:55.635 "data_size": 0 00:15:55.635 } 00:15:55.635 ] 00:15:55.635 }' 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.635 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.202 [2024-12-06 06:42:14.612576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:56.202 BaseBdev3 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.202 [ 00:15:56.202 { 00:15:56.202 "name": "BaseBdev3", 00:15:56.202 "aliases": [ 00:15:56.202 "90e2bb21-8a51-4afb-907c-c08ba763431b" 00:15:56.202 ], 00:15:56.202 "product_name": "Malloc disk", 00:15:56.202 "block_size": 512, 00:15:56.202 "num_blocks": 65536, 00:15:56.202 "uuid": "90e2bb21-8a51-4afb-907c-c08ba763431b", 00:15:56.202 "assigned_rate_limits": { 00:15:56.202 "rw_ios_per_sec": 0, 00:15:56.202 "rw_mbytes_per_sec": 0, 00:15:56.202 "r_mbytes_per_sec": 0, 00:15:56.202 "w_mbytes_per_sec": 0 00:15:56.202 }, 00:15:56.202 "claimed": true, 00:15:56.202 "claim_type": "exclusive_write", 00:15:56.202 "zoned": false, 00:15:56.202 "supported_io_types": { 
00:15:56.202 "read": true, 00:15:56.202 "write": true, 00:15:56.202 "unmap": true, 00:15:56.202 "flush": true, 00:15:56.202 "reset": true, 00:15:56.202 "nvme_admin": false, 00:15:56.202 "nvme_io": false, 00:15:56.202 "nvme_io_md": false, 00:15:56.202 "write_zeroes": true, 00:15:56.202 "zcopy": true, 00:15:56.202 "get_zone_info": false, 00:15:56.202 "zone_management": false, 00:15:56.202 "zone_append": false, 00:15:56.202 "compare": false, 00:15:56.202 "compare_and_write": false, 00:15:56.202 "abort": true, 00:15:56.202 "seek_hole": false, 00:15:56.202 "seek_data": false, 00:15:56.202 "copy": true, 00:15:56.202 "nvme_iov_md": false 00:15:56.202 }, 00:15:56.202 "memory_domains": [ 00:15:56.202 { 00:15:56.202 "dma_device_id": "system", 00:15:56.202 "dma_device_type": 1 00:15:56.202 }, 00:15:56.202 { 00:15:56.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.202 "dma_device_type": 2 00:15:56.202 } 00:15:56.202 ], 00:15:56.202 "driver_specific": {} 00:15:56.202 } 00:15:56.202 ] 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.202 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.203 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.203 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.203 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.203 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.203 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.203 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.203 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.203 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.203 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.203 "name": "Existed_Raid", 00:15:56.203 "uuid": "e43bc892-78b1-4426-b9ab-dc4cc3ed90f6", 00:15:56.203 "strip_size_kb": 64, 00:15:56.203 "state": "configuring", 00:15:56.203 "raid_level": "concat", 00:15:56.203 "superblock": true, 00:15:56.203 "num_base_bdevs": 4, 00:15:56.203 "num_base_bdevs_discovered": 3, 00:15:56.203 "num_base_bdevs_operational": 4, 00:15:56.203 "base_bdevs_list": [ 00:15:56.203 { 00:15:56.203 "name": "BaseBdev1", 00:15:56.203 "uuid": "eba6c282-43bc-4871-a823-c656430836b4", 00:15:56.203 "is_configured": true, 00:15:56.203 "data_offset": 2048, 00:15:56.203 "data_size": 63488 00:15:56.203 }, 00:15:56.203 { 00:15:56.203 "name": "BaseBdev2", 00:15:56.203 
"uuid": "8abe1f0a-7224-47e8-9212-2b565a84115e", 00:15:56.203 "is_configured": true, 00:15:56.203 "data_offset": 2048, 00:15:56.203 "data_size": 63488 00:15:56.203 }, 00:15:56.203 { 00:15:56.203 "name": "BaseBdev3", 00:15:56.203 "uuid": "90e2bb21-8a51-4afb-907c-c08ba763431b", 00:15:56.203 "is_configured": true, 00:15:56.203 "data_offset": 2048, 00:15:56.203 "data_size": 63488 00:15:56.203 }, 00:15:56.203 { 00:15:56.203 "name": "BaseBdev4", 00:15:56.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.203 "is_configured": false, 00:15:56.203 "data_offset": 0, 00:15:56.203 "data_size": 0 00:15:56.203 } 00:15:56.203 ] 00:15:56.203 }' 00:15:56.203 06:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.203 06:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.770 [2024-12-06 06:42:15.203904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:56.770 [2024-12-06 06:42:15.204244] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:56.770 [2024-12-06 06:42:15.204265] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:56.770 BaseBdev4 00:15:56.770 [2024-12-06 06:42:15.204655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:56.770 [2024-12-06 06:42:15.204852] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:56.770 [2024-12-06 06:42:15.204880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:15:56.770 [2024-12-06 06:42:15.205056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.770 [ 00:15:56.770 { 00:15:56.770 "name": "BaseBdev4", 00:15:56.770 "aliases": [ 00:15:56.770 "6ee08b93-3b64-467b-9e37-b47e776608ea" 00:15:56.770 ], 00:15:56.770 "product_name": "Malloc disk", 00:15:56.770 "block_size": 512, 00:15:56.770 
"num_blocks": 65536, 00:15:56.770 "uuid": "6ee08b93-3b64-467b-9e37-b47e776608ea", 00:15:56.770 "assigned_rate_limits": { 00:15:56.770 "rw_ios_per_sec": 0, 00:15:56.770 "rw_mbytes_per_sec": 0, 00:15:56.770 "r_mbytes_per_sec": 0, 00:15:56.770 "w_mbytes_per_sec": 0 00:15:56.770 }, 00:15:56.770 "claimed": true, 00:15:56.770 "claim_type": "exclusive_write", 00:15:56.770 "zoned": false, 00:15:56.770 "supported_io_types": { 00:15:56.770 "read": true, 00:15:56.770 "write": true, 00:15:56.770 "unmap": true, 00:15:56.770 "flush": true, 00:15:56.770 "reset": true, 00:15:56.770 "nvme_admin": false, 00:15:56.770 "nvme_io": false, 00:15:56.770 "nvme_io_md": false, 00:15:56.770 "write_zeroes": true, 00:15:56.770 "zcopy": true, 00:15:56.770 "get_zone_info": false, 00:15:56.770 "zone_management": false, 00:15:56.770 "zone_append": false, 00:15:56.770 "compare": false, 00:15:56.770 "compare_and_write": false, 00:15:56.770 "abort": true, 00:15:56.770 "seek_hole": false, 00:15:56.770 "seek_data": false, 00:15:56.770 "copy": true, 00:15:56.770 "nvme_iov_md": false 00:15:56.770 }, 00:15:56.770 "memory_domains": [ 00:15:56.770 { 00:15:56.770 "dma_device_id": "system", 00:15:56.770 "dma_device_type": 1 00:15:56.770 }, 00:15:56.770 { 00:15:56.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.770 "dma_device_type": 2 00:15:56.770 } 00:15:56.770 ], 00:15:56.770 "driver_specific": {} 00:15:56.770 } 00:15:56.770 ] 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.770 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.771 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.771 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.771 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.771 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.771 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.771 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.771 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.771 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.771 "name": "Existed_Raid", 00:15:56.771 "uuid": "e43bc892-78b1-4426-b9ab-dc4cc3ed90f6", 00:15:56.771 "strip_size_kb": 64, 00:15:56.771 "state": "online", 00:15:56.771 "raid_level": "concat", 00:15:56.771 "superblock": true, 00:15:56.771 "num_base_bdevs": 4, 
00:15:56.771 "num_base_bdevs_discovered": 4, 00:15:56.771 "num_base_bdevs_operational": 4, 00:15:56.771 "base_bdevs_list": [ 00:15:56.771 { 00:15:56.771 "name": "BaseBdev1", 00:15:56.771 "uuid": "eba6c282-43bc-4871-a823-c656430836b4", 00:15:56.771 "is_configured": true, 00:15:56.771 "data_offset": 2048, 00:15:56.771 "data_size": 63488 00:15:56.771 }, 00:15:56.771 { 00:15:56.771 "name": "BaseBdev2", 00:15:56.771 "uuid": "8abe1f0a-7224-47e8-9212-2b565a84115e", 00:15:56.771 "is_configured": true, 00:15:56.771 "data_offset": 2048, 00:15:56.771 "data_size": 63488 00:15:56.771 }, 00:15:56.771 { 00:15:56.771 "name": "BaseBdev3", 00:15:56.771 "uuid": "90e2bb21-8a51-4afb-907c-c08ba763431b", 00:15:56.771 "is_configured": true, 00:15:56.771 "data_offset": 2048, 00:15:56.771 "data_size": 63488 00:15:56.771 }, 00:15:56.771 { 00:15:56.771 "name": "BaseBdev4", 00:15:56.771 "uuid": "6ee08b93-3b64-467b-9e37-b47e776608ea", 00:15:56.771 "is_configured": true, 00:15:56.771 "data_offset": 2048, 00:15:56.771 "data_size": 63488 00:15:56.771 } 00:15:56.771 ] 00:15:56.771 }' 00:15:56.771 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.771 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.337 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:57.337 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:57.337 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:57.337 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:57.337 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:57.337 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:57.337 
06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:57.337 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.337 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:57.337 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.337 [2024-12-06 06:42:15.736571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.337 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.337 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:57.337 "name": "Existed_Raid", 00:15:57.337 "aliases": [ 00:15:57.337 "e43bc892-78b1-4426-b9ab-dc4cc3ed90f6" 00:15:57.337 ], 00:15:57.337 "product_name": "Raid Volume", 00:15:57.337 "block_size": 512, 00:15:57.337 "num_blocks": 253952, 00:15:57.337 "uuid": "e43bc892-78b1-4426-b9ab-dc4cc3ed90f6", 00:15:57.337 "assigned_rate_limits": { 00:15:57.337 "rw_ios_per_sec": 0, 00:15:57.337 "rw_mbytes_per_sec": 0, 00:15:57.337 "r_mbytes_per_sec": 0, 00:15:57.337 "w_mbytes_per_sec": 0 00:15:57.337 }, 00:15:57.337 "claimed": false, 00:15:57.337 "zoned": false, 00:15:57.337 "supported_io_types": { 00:15:57.337 "read": true, 00:15:57.337 "write": true, 00:15:57.337 "unmap": true, 00:15:57.337 "flush": true, 00:15:57.337 "reset": true, 00:15:57.337 "nvme_admin": false, 00:15:57.337 "nvme_io": false, 00:15:57.337 "nvme_io_md": false, 00:15:57.337 "write_zeroes": true, 00:15:57.337 "zcopy": false, 00:15:57.337 "get_zone_info": false, 00:15:57.337 "zone_management": false, 00:15:57.337 "zone_append": false, 00:15:57.337 "compare": false, 00:15:57.337 "compare_and_write": false, 00:15:57.337 "abort": false, 00:15:57.337 "seek_hole": false, 00:15:57.337 "seek_data": false, 00:15:57.337 "copy": false, 00:15:57.337 
"nvme_iov_md": false 00:15:57.337 }, 00:15:57.337 "memory_domains": [ 00:15:57.337 { 00:15:57.337 "dma_device_id": "system", 00:15:57.337 "dma_device_type": 1 00:15:57.337 }, 00:15:57.337 { 00:15:57.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.337 "dma_device_type": 2 00:15:57.337 }, 00:15:57.337 { 00:15:57.337 "dma_device_id": "system", 00:15:57.337 "dma_device_type": 1 00:15:57.337 }, 00:15:57.338 { 00:15:57.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.338 "dma_device_type": 2 00:15:57.338 }, 00:15:57.338 { 00:15:57.338 "dma_device_id": "system", 00:15:57.338 "dma_device_type": 1 00:15:57.338 }, 00:15:57.338 { 00:15:57.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.338 "dma_device_type": 2 00:15:57.338 }, 00:15:57.338 { 00:15:57.338 "dma_device_id": "system", 00:15:57.338 "dma_device_type": 1 00:15:57.338 }, 00:15:57.338 { 00:15:57.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.338 "dma_device_type": 2 00:15:57.338 } 00:15:57.338 ], 00:15:57.338 "driver_specific": { 00:15:57.338 "raid": { 00:15:57.338 "uuid": "e43bc892-78b1-4426-b9ab-dc4cc3ed90f6", 00:15:57.338 "strip_size_kb": 64, 00:15:57.338 "state": "online", 00:15:57.338 "raid_level": "concat", 00:15:57.338 "superblock": true, 00:15:57.338 "num_base_bdevs": 4, 00:15:57.338 "num_base_bdevs_discovered": 4, 00:15:57.338 "num_base_bdevs_operational": 4, 00:15:57.338 "base_bdevs_list": [ 00:15:57.338 { 00:15:57.338 "name": "BaseBdev1", 00:15:57.338 "uuid": "eba6c282-43bc-4871-a823-c656430836b4", 00:15:57.338 "is_configured": true, 00:15:57.338 "data_offset": 2048, 00:15:57.338 "data_size": 63488 00:15:57.338 }, 00:15:57.338 { 00:15:57.338 "name": "BaseBdev2", 00:15:57.338 "uuid": "8abe1f0a-7224-47e8-9212-2b565a84115e", 00:15:57.338 "is_configured": true, 00:15:57.338 "data_offset": 2048, 00:15:57.338 "data_size": 63488 00:15:57.338 }, 00:15:57.338 { 00:15:57.338 "name": "BaseBdev3", 00:15:57.338 "uuid": "90e2bb21-8a51-4afb-907c-c08ba763431b", 00:15:57.338 "is_configured": true, 
00:15:57.338 "data_offset": 2048, 00:15:57.338 "data_size": 63488 00:15:57.338 }, 00:15:57.338 { 00:15:57.338 "name": "BaseBdev4", 00:15:57.338 "uuid": "6ee08b93-3b64-467b-9e37-b47e776608ea", 00:15:57.338 "is_configured": true, 00:15:57.338 "data_offset": 2048, 00:15:57.338 "data_size": 63488 00:15:57.338 } 00:15:57.338 ] 00:15:57.338 } 00:15:57.338 } 00:15:57.338 }' 00:15:57.338 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:57.338 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:57.338 BaseBdev2 00:15:57.338 BaseBdev3 00:15:57.338 BaseBdev4' 00:15:57.338 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.338 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:57.338 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.338 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:57.338 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.338 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.338 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.338 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.338 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.338 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.338 06:42:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.338 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:57.338 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.338 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.338 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.338 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.597 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.597 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.597 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.597 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:57.597 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.597 06:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.597 06:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.597 [2024-12-06 06:42:16.092413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:57.597 [2024-12-06 06:42:16.092453] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.597 [2024-12-06 06:42:16.092518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.597 "name": "Existed_Raid", 00:15:57.597 "uuid": "e43bc892-78b1-4426-b9ab-dc4cc3ed90f6", 00:15:57.597 "strip_size_kb": 64, 00:15:57.597 "state": "offline", 00:15:57.597 "raid_level": "concat", 00:15:57.597 "superblock": true, 00:15:57.597 "num_base_bdevs": 4, 00:15:57.597 "num_base_bdevs_discovered": 3, 00:15:57.597 "num_base_bdevs_operational": 3, 00:15:57.597 "base_bdevs_list": [ 00:15:57.597 { 00:15:57.597 "name": null, 00:15:57.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.597 "is_configured": false, 00:15:57.597 "data_offset": 0, 00:15:57.597 "data_size": 63488 00:15:57.597 }, 00:15:57.597 { 00:15:57.597 "name": "BaseBdev2", 00:15:57.597 "uuid": "8abe1f0a-7224-47e8-9212-2b565a84115e", 00:15:57.597 "is_configured": true, 00:15:57.597 "data_offset": 2048, 00:15:57.597 "data_size": 63488 00:15:57.597 }, 00:15:57.597 { 00:15:57.597 "name": "BaseBdev3", 00:15:57.597 "uuid": "90e2bb21-8a51-4afb-907c-c08ba763431b", 00:15:57.597 "is_configured": true, 00:15:57.597 "data_offset": 2048, 00:15:57.597 "data_size": 63488 00:15:57.597 }, 00:15:57.597 { 00:15:57.597 "name": "BaseBdev4", 00:15:57.597 "uuid": "6ee08b93-3b64-467b-9e37-b47e776608ea", 00:15:57.597 "is_configured": true, 00:15:57.597 "data_offset": 2048, 00:15:57.597 "data_size": 63488 00:15:57.597 } 00:15:57.597 ] 00:15:57.597 }' 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.597 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.164 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:58.164 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:58.164 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.164 
06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.164 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:58.164 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.164 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.164 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:58.164 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:58.164 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:58.164 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.164 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.164 [2024-12-06 06:42:16.751707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:58.423 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.423 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.424 [2024-12-06 06:42:16.896789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.424 06:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:58.424 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.424 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:58.424 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:58.424 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:58.424 06:42:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.424 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.424 [2024-12-06 06:42:17.038049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:58.424 [2024-12-06 06:42:17.038239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:58.683 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.683 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:58.683 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:58.683 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.683 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:58.683 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.683 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.683 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.683 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:58.683 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.684 BaseBdev2 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.684 [ 00:15:58.684 { 00:15:58.684 "name": "BaseBdev2", 00:15:58.684 "aliases": [ 00:15:58.684 
"b328597a-e148-4e05-9c4c-de7f1ed95d77" 00:15:58.684 ], 00:15:58.684 "product_name": "Malloc disk", 00:15:58.684 "block_size": 512, 00:15:58.684 "num_blocks": 65536, 00:15:58.684 "uuid": "b328597a-e148-4e05-9c4c-de7f1ed95d77", 00:15:58.684 "assigned_rate_limits": { 00:15:58.684 "rw_ios_per_sec": 0, 00:15:58.684 "rw_mbytes_per_sec": 0, 00:15:58.684 "r_mbytes_per_sec": 0, 00:15:58.684 "w_mbytes_per_sec": 0 00:15:58.684 }, 00:15:58.684 "claimed": false, 00:15:58.684 "zoned": false, 00:15:58.684 "supported_io_types": { 00:15:58.684 "read": true, 00:15:58.684 "write": true, 00:15:58.684 "unmap": true, 00:15:58.684 "flush": true, 00:15:58.684 "reset": true, 00:15:58.684 "nvme_admin": false, 00:15:58.684 "nvme_io": false, 00:15:58.684 "nvme_io_md": false, 00:15:58.684 "write_zeroes": true, 00:15:58.684 "zcopy": true, 00:15:58.684 "get_zone_info": false, 00:15:58.684 "zone_management": false, 00:15:58.684 "zone_append": false, 00:15:58.684 "compare": false, 00:15:58.684 "compare_and_write": false, 00:15:58.684 "abort": true, 00:15:58.684 "seek_hole": false, 00:15:58.684 "seek_data": false, 00:15:58.684 "copy": true, 00:15:58.684 "nvme_iov_md": false 00:15:58.684 }, 00:15:58.684 "memory_domains": [ 00:15:58.684 { 00:15:58.684 "dma_device_id": "system", 00:15:58.684 "dma_device_type": 1 00:15:58.684 }, 00:15:58.684 { 00:15:58.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.684 "dma_device_type": 2 00:15:58.684 } 00:15:58.684 ], 00:15:58.684 "driver_specific": {} 00:15:58.684 } 00:15:58.684 ] 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:58.684 06:42:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.684 BaseBdev3 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.684 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.944 [ 00:15:58.944 { 
00:15:58.944 "name": "BaseBdev3", 00:15:58.944 "aliases": [ 00:15:58.944 "23971170-dde1-4958-aa72-30de2595c6c7" 00:15:58.944 ], 00:15:58.944 "product_name": "Malloc disk", 00:15:58.944 "block_size": 512, 00:15:58.944 "num_blocks": 65536, 00:15:58.944 "uuid": "23971170-dde1-4958-aa72-30de2595c6c7", 00:15:58.944 "assigned_rate_limits": { 00:15:58.944 "rw_ios_per_sec": 0, 00:15:58.944 "rw_mbytes_per_sec": 0, 00:15:58.944 "r_mbytes_per_sec": 0, 00:15:58.944 "w_mbytes_per_sec": 0 00:15:58.944 }, 00:15:58.944 "claimed": false, 00:15:58.944 "zoned": false, 00:15:58.944 "supported_io_types": { 00:15:58.944 "read": true, 00:15:58.944 "write": true, 00:15:58.944 "unmap": true, 00:15:58.944 "flush": true, 00:15:58.944 "reset": true, 00:15:58.944 "nvme_admin": false, 00:15:58.944 "nvme_io": false, 00:15:58.944 "nvme_io_md": false, 00:15:58.944 "write_zeroes": true, 00:15:58.944 "zcopy": true, 00:15:58.944 "get_zone_info": false, 00:15:58.944 "zone_management": false, 00:15:58.944 "zone_append": false, 00:15:58.944 "compare": false, 00:15:58.944 "compare_and_write": false, 00:15:58.944 "abort": true, 00:15:58.944 "seek_hole": false, 00:15:58.944 "seek_data": false, 00:15:58.944 "copy": true, 00:15:58.944 "nvme_iov_md": false 00:15:58.944 }, 00:15:58.944 "memory_domains": [ 00:15:58.944 { 00:15:58.944 "dma_device_id": "system", 00:15:58.944 "dma_device_type": 1 00:15:58.944 }, 00:15:58.944 { 00:15:58.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.944 "dma_device_type": 2 00:15:58.944 } 00:15:58.944 ], 00:15:58.944 "driver_specific": {} 00:15:58.944 } 00:15:58.944 ] 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.944 BaseBdev4 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:58.944 [ 00:15:58.944 { 00:15:58.944 "name": "BaseBdev4", 00:15:58.944 "aliases": [ 00:15:58.944 "d1c05aeb-9b1d-4e8f-8376-862bd0cf941e" 00:15:58.944 ], 00:15:58.944 "product_name": "Malloc disk", 00:15:58.944 "block_size": 512, 00:15:58.944 "num_blocks": 65536, 00:15:58.944 "uuid": "d1c05aeb-9b1d-4e8f-8376-862bd0cf941e", 00:15:58.944 "assigned_rate_limits": { 00:15:58.944 "rw_ios_per_sec": 0, 00:15:58.944 "rw_mbytes_per_sec": 0, 00:15:58.944 "r_mbytes_per_sec": 0, 00:15:58.944 "w_mbytes_per_sec": 0 00:15:58.944 }, 00:15:58.944 "claimed": false, 00:15:58.944 "zoned": false, 00:15:58.944 "supported_io_types": { 00:15:58.944 "read": true, 00:15:58.944 "write": true, 00:15:58.944 "unmap": true, 00:15:58.944 "flush": true, 00:15:58.944 "reset": true, 00:15:58.944 "nvme_admin": false, 00:15:58.944 "nvme_io": false, 00:15:58.944 "nvme_io_md": false, 00:15:58.944 "write_zeroes": true, 00:15:58.944 "zcopy": true, 00:15:58.944 "get_zone_info": false, 00:15:58.944 "zone_management": false, 00:15:58.944 "zone_append": false, 00:15:58.944 "compare": false, 00:15:58.944 "compare_and_write": false, 00:15:58.944 "abort": true, 00:15:58.944 "seek_hole": false, 00:15:58.944 "seek_data": false, 00:15:58.944 "copy": true, 00:15:58.944 "nvme_iov_md": false 00:15:58.944 }, 00:15:58.944 "memory_domains": [ 00:15:58.944 { 00:15:58.944 "dma_device_id": "system", 00:15:58.944 "dma_device_type": 1 00:15:58.944 }, 00:15:58.944 { 00:15:58.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.944 "dma_device_type": 2 00:15:58.944 } 00:15:58.944 ], 00:15:58.944 "driver_specific": {} 00:15:58.944 } 00:15:58.944 ] 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:58.944 06:42:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:58.944 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.945 [2024-12-06 06:42:17.424935] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:58.945 [2024-12-06 06:42:17.425114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:58.945 [2024-12-06 06:42:17.425252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:58.945 [2024-12-06 06:42:17.428380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:58.945 [2024-12-06 06:42:17.428607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.945 "name": "Existed_Raid", 00:15:58.945 "uuid": "e1b13762-be36-4d79-ae04-47ff1cc65aca", 00:15:58.945 "strip_size_kb": 64, 00:15:58.945 "state": "configuring", 00:15:58.945 "raid_level": "concat", 00:15:58.945 "superblock": true, 00:15:58.945 "num_base_bdevs": 4, 00:15:58.945 "num_base_bdevs_discovered": 3, 00:15:58.945 "num_base_bdevs_operational": 4, 00:15:58.945 "base_bdevs_list": [ 00:15:58.945 { 00:15:58.945 "name": "BaseBdev1", 00:15:58.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.945 "is_configured": false, 00:15:58.945 "data_offset": 0, 00:15:58.945 "data_size": 0 00:15:58.945 }, 00:15:58.945 { 00:15:58.945 "name": "BaseBdev2", 00:15:58.945 "uuid": "b328597a-e148-4e05-9c4c-de7f1ed95d77", 00:15:58.945 "is_configured": true, 00:15:58.945 "data_offset": 2048, 00:15:58.945 "data_size": 63488 
00:15:58.945 }, 00:15:58.945 { 00:15:58.945 "name": "BaseBdev3", 00:15:58.945 "uuid": "23971170-dde1-4958-aa72-30de2595c6c7", 00:15:58.945 "is_configured": true, 00:15:58.945 "data_offset": 2048, 00:15:58.945 "data_size": 63488 00:15:58.945 }, 00:15:58.945 { 00:15:58.945 "name": "BaseBdev4", 00:15:58.945 "uuid": "d1c05aeb-9b1d-4e8f-8376-862bd0cf941e", 00:15:58.945 "is_configured": true, 00:15:58.945 "data_offset": 2048, 00:15:58.945 "data_size": 63488 00:15:58.945 } 00:15:58.945 ] 00:15:58.945 }' 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.945 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.513 [2024-12-06 06:42:17.989143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.513 06:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.513 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.513 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.513 "name": "Existed_Raid", 00:15:59.513 "uuid": "e1b13762-be36-4d79-ae04-47ff1cc65aca", 00:15:59.513 "strip_size_kb": 64, 00:15:59.513 "state": "configuring", 00:15:59.513 "raid_level": "concat", 00:15:59.513 "superblock": true, 00:15:59.513 "num_base_bdevs": 4, 00:15:59.513 "num_base_bdevs_discovered": 2, 00:15:59.513 "num_base_bdevs_operational": 4, 00:15:59.513 "base_bdevs_list": [ 00:15:59.513 { 00:15:59.513 "name": "BaseBdev1", 00:15:59.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.513 "is_configured": false, 00:15:59.513 "data_offset": 0, 00:15:59.513 "data_size": 0 00:15:59.513 }, 00:15:59.513 { 00:15:59.513 "name": null, 00:15:59.513 "uuid": "b328597a-e148-4e05-9c4c-de7f1ed95d77", 00:15:59.513 "is_configured": false, 00:15:59.513 "data_offset": 0, 00:15:59.513 "data_size": 63488 
00:15:59.513 }, 00:15:59.513 { 00:15:59.513 "name": "BaseBdev3", 00:15:59.513 "uuid": "23971170-dde1-4958-aa72-30de2595c6c7", 00:15:59.513 "is_configured": true, 00:15:59.513 "data_offset": 2048, 00:15:59.513 "data_size": 63488 00:15:59.513 }, 00:15:59.513 { 00:15:59.513 "name": "BaseBdev4", 00:15:59.513 "uuid": "d1c05aeb-9b1d-4e8f-8376-862bd0cf941e", 00:15:59.513 "is_configured": true, 00:15:59.513 "data_offset": 2048, 00:15:59.513 "data_size": 63488 00:15:59.513 } 00:15:59.513 ] 00:15:59.513 }' 00:15:59.513 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.513 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.082 [2024-12-06 06:42:18.613745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.082 BaseBdev1 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.082 [ 00:16:00.082 { 00:16:00.082 "name": "BaseBdev1", 00:16:00.082 "aliases": [ 00:16:00.082 "3f5c3d9b-0aa5-4deb-9aad-9256fe6db490" 00:16:00.082 ], 00:16:00.082 "product_name": "Malloc disk", 00:16:00.082 "block_size": 512, 00:16:00.082 "num_blocks": 65536, 00:16:00.082 "uuid": "3f5c3d9b-0aa5-4deb-9aad-9256fe6db490", 00:16:00.082 "assigned_rate_limits": { 00:16:00.082 "rw_ios_per_sec": 0, 00:16:00.082 "rw_mbytes_per_sec": 0, 
00:16:00.082 "r_mbytes_per_sec": 0, 00:16:00.082 "w_mbytes_per_sec": 0 00:16:00.082 }, 00:16:00.082 "claimed": true, 00:16:00.082 "claim_type": "exclusive_write", 00:16:00.082 "zoned": false, 00:16:00.082 "supported_io_types": { 00:16:00.082 "read": true, 00:16:00.082 "write": true, 00:16:00.082 "unmap": true, 00:16:00.082 "flush": true, 00:16:00.082 "reset": true, 00:16:00.082 "nvme_admin": false, 00:16:00.082 "nvme_io": false, 00:16:00.082 "nvme_io_md": false, 00:16:00.082 "write_zeroes": true, 00:16:00.082 "zcopy": true, 00:16:00.082 "get_zone_info": false, 00:16:00.082 "zone_management": false, 00:16:00.082 "zone_append": false, 00:16:00.082 "compare": false, 00:16:00.082 "compare_and_write": false, 00:16:00.082 "abort": true, 00:16:00.082 "seek_hole": false, 00:16:00.082 "seek_data": false, 00:16:00.082 "copy": true, 00:16:00.082 "nvme_iov_md": false 00:16:00.082 }, 00:16:00.082 "memory_domains": [ 00:16:00.082 { 00:16:00.082 "dma_device_id": "system", 00:16:00.082 "dma_device_type": 1 00:16:00.082 }, 00:16:00.082 { 00:16:00.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.082 "dma_device_type": 2 00:16:00.082 } 00:16:00.082 ], 00:16:00.082 "driver_specific": {} 00:16:00.082 } 00:16:00.082 ] 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:00.082 06:42:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.082 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.082 "name": "Existed_Raid", 00:16:00.082 "uuid": "e1b13762-be36-4d79-ae04-47ff1cc65aca", 00:16:00.082 "strip_size_kb": 64, 00:16:00.082 "state": "configuring", 00:16:00.082 "raid_level": "concat", 00:16:00.082 "superblock": true, 00:16:00.082 "num_base_bdevs": 4, 00:16:00.082 "num_base_bdevs_discovered": 3, 00:16:00.082 "num_base_bdevs_operational": 4, 00:16:00.083 "base_bdevs_list": [ 00:16:00.083 { 00:16:00.083 "name": "BaseBdev1", 00:16:00.083 "uuid": "3f5c3d9b-0aa5-4deb-9aad-9256fe6db490", 00:16:00.083 "is_configured": true, 00:16:00.083 "data_offset": 2048, 00:16:00.083 "data_size": 63488 00:16:00.083 }, 00:16:00.083 { 
00:16:00.083 "name": null, 00:16:00.083 "uuid": "b328597a-e148-4e05-9c4c-de7f1ed95d77", 00:16:00.083 "is_configured": false, 00:16:00.083 "data_offset": 0, 00:16:00.083 "data_size": 63488 00:16:00.083 }, 00:16:00.083 { 00:16:00.083 "name": "BaseBdev3", 00:16:00.083 "uuid": "23971170-dde1-4958-aa72-30de2595c6c7", 00:16:00.083 "is_configured": true, 00:16:00.083 "data_offset": 2048, 00:16:00.083 "data_size": 63488 00:16:00.083 }, 00:16:00.083 { 00:16:00.083 "name": "BaseBdev4", 00:16:00.083 "uuid": "d1c05aeb-9b1d-4e8f-8376-862bd0cf941e", 00:16:00.083 "is_configured": true, 00:16:00.083 "data_offset": 2048, 00:16:00.083 "data_size": 63488 00:16:00.083 } 00:16:00.083 ] 00:16:00.083 }' 00:16:00.083 06:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.083 06:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.649 [2024-12-06 06:42:19.210030] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.649 06:42:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.649 "name": "Existed_Raid", 00:16:00.649 "uuid": "e1b13762-be36-4d79-ae04-47ff1cc65aca", 00:16:00.649 "strip_size_kb": 64, 00:16:00.649 "state": "configuring", 00:16:00.649 "raid_level": "concat", 00:16:00.649 "superblock": true, 00:16:00.649 "num_base_bdevs": 4, 00:16:00.649 "num_base_bdevs_discovered": 2, 00:16:00.649 "num_base_bdevs_operational": 4, 00:16:00.649 "base_bdevs_list": [ 00:16:00.649 { 00:16:00.649 "name": "BaseBdev1", 00:16:00.649 "uuid": "3f5c3d9b-0aa5-4deb-9aad-9256fe6db490", 00:16:00.649 "is_configured": true, 00:16:00.649 "data_offset": 2048, 00:16:00.649 "data_size": 63488 00:16:00.649 }, 00:16:00.649 { 00:16:00.649 "name": null, 00:16:00.649 "uuid": "b328597a-e148-4e05-9c4c-de7f1ed95d77", 00:16:00.649 "is_configured": false, 00:16:00.649 "data_offset": 0, 00:16:00.649 "data_size": 63488 00:16:00.649 }, 00:16:00.649 { 00:16:00.649 "name": null, 00:16:00.649 "uuid": "23971170-dde1-4958-aa72-30de2595c6c7", 00:16:00.649 "is_configured": false, 00:16:00.649 "data_offset": 0, 00:16:00.649 "data_size": 63488 00:16:00.649 }, 00:16:00.649 { 00:16:00.649 "name": "BaseBdev4", 00:16:00.649 "uuid": "d1c05aeb-9b1d-4e8f-8376-862bd0cf941e", 00:16:00.649 "is_configured": true, 00:16:00.649 "data_offset": 2048, 00:16:00.649 "data_size": 63488 00:16:00.649 } 00:16:00.649 ] 00:16:00.649 }' 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.649 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.216 
06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.216 [2024-12-06 06:42:19.794154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.216 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.216 "name": "Existed_Raid", 00:16:01.216 "uuid": "e1b13762-be36-4d79-ae04-47ff1cc65aca", 00:16:01.216 "strip_size_kb": 64, 00:16:01.216 "state": "configuring", 00:16:01.216 "raid_level": "concat", 00:16:01.216 "superblock": true, 00:16:01.216 "num_base_bdevs": 4, 00:16:01.216 "num_base_bdevs_discovered": 3, 00:16:01.216 "num_base_bdevs_operational": 4, 00:16:01.216 "base_bdevs_list": [ 00:16:01.216 { 00:16:01.217 "name": "BaseBdev1", 00:16:01.217 "uuid": "3f5c3d9b-0aa5-4deb-9aad-9256fe6db490", 00:16:01.217 "is_configured": true, 00:16:01.217 "data_offset": 2048, 00:16:01.217 "data_size": 63488 00:16:01.217 }, 00:16:01.217 { 00:16:01.217 "name": null, 00:16:01.217 "uuid": "b328597a-e148-4e05-9c4c-de7f1ed95d77", 00:16:01.217 "is_configured": false, 00:16:01.217 "data_offset": 0, 00:16:01.217 "data_size": 63488 00:16:01.217 }, 00:16:01.217 { 00:16:01.217 "name": "BaseBdev3", 00:16:01.217 "uuid": "23971170-dde1-4958-aa72-30de2595c6c7", 00:16:01.217 "is_configured": true, 00:16:01.217 "data_offset": 2048, 00:16:01.217 "data_size": 63488 00:16:01.217 }, 00:16:01.217 { 00:16:01.217 "name": "BaseBdev4", 00:16:01.217 "uuid": 
"d1c05aeb-9b1d-4e8f-8376-862bd0cf941e", 00:16:01.217 "is_configured": true, 00:16:01.217 "data_offset": 2048, 00:16:01.217 "data_size": 63488 00:16:01.217 } 00:16:01.217 ] 00:16:01.217 }' 00:16:01.217 06:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.217 06:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.784 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.784 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:01.784 06:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.784 06:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.784 06:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.784 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:01.784 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:01.784 06:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.784 06:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.784 [2024-12-06 06:42:20.370403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:02.042 06:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.042 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:02.042 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.042 06:42:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.042 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:02.042 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.042 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.042 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.042 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.042 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.042 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.042 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.042 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.042 06:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.042 06:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.042 06:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.042 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.042 "name": "Existed_Raid", 00:16:02.042 "uuid": "e1b13762-be36-4d79-ae04-47ff1cc65aca", 00:16:02.042 "strip_size_kb": 64, 00:16:02.042 "state": "configuring", 00:16:02.042 "raid_level": "concat", 00:16:02.042 "superblock": true, 00:16:02.042 "num_base_bdevs": 4, 00:16:02.042 "num_base_bdevs_discovered": 2, 00:16:02.042 "num_base_bdevs_operational": 4, 00:16:02.042 "base_bdevs_list": [ 00:16:02.043 { 00:16:02.043 "name": null, 00:16:02.043 
"uuid": "3f5c3d9b-0aa5-4deb-9aad-9256fe6db490", 00:16:02.043 "is_configured": false, 00:16:02.043 "data_offset": 0, 00:16:02.043 "data_size": 63488 00:16:02.043 }, 00:16:02.043 { 00:16:02.043 "name": null, 00:16:02.043 "uuid": "b328597a-e148-4e05-9c4c-de7f1ed95d77", 00:16:02.043 "is_configured": false, 00:16:02.043 "data_offset": 0, 00:16:02.043 "data_size": 63488 00:16:02.043 }, 00:16:02.043 { 00:16:02.043 "name": "BaseBdev3", 00:16:02.043 "uuid": "23971170-dde1-4958-aa72-30de2595c6c7", 00:16:02.043 "is_configured": true, 00:16:02.043 "data_offset": 2048, 00:16:02.043 "data_size": 63488 00:16:02.043 }, 00:16:02.043 { 00:16:02.043 "name": "BaseBdev4", 00:16:02.043 "uuid": "d1c05aeb-9b1d-4e8f-8376-862bd0cf941e", 00:16:02.043 "is_configured": true, 00:16:02.043 "data_offset": 2048, 00:16:02.043 "data_size": 63488 00:16:02.043 } 00:16:02.043 ] 00:16:02.043 }' 00:16:02.043 06:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.043 06:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.610 [2024-12-06 06:42:21.076737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.610 "name": "Existed_Raid", 00:16:02.610 "uuid": "e1b13762-be36-4d79-ae04-47ff1cc65aca", 00:16:02.610 "strip_size_kb": 64, 00:16:02.610 "state": "configuring", 00:16:02.610 "raid_level": "concat", 00:16:02.610 "superblock": true, 00:16:02.610 "num_base_bdevs": 4, 00:16:02.610 "num_base_bdevs_discovered": 3, 00:16:02.610 "num_base_bdevs_operational": 4, 00:16:02.610 "base_bdevs_list": [ 00:16:02.610 { 00:16:02.610 "name": null, 00:16:02.610 "uuid": "3f5c3d9b-0aa5-4deb-9aad-9256fe6db490", 00:16:02.610 "is_configured": false, 00:16:02.610 "data_offset": 0, 00:16:02.610 "data_size": 63488 00:16:02.610 }, 00:16:02.610 { 00:16:02.610 "name": "BaseBdev2", 00:16:02.610 "uuid": "b328597a-e148-4e05-9c4c-de7f1ed95d77", 00:16:02.610 "is_configured": true, 00:16:02.610 "data_offset": 2048, 00:16:02.610 "data_size": 63488 00:16:02.610 }, 00:16:02.610 { 00:16:02.610 "name": "BaseBdev3", 00:16:02.610 "uuid": "23971170-dde1-4958-aa72-30de2595c6c7", 00:16:02.610 "is_configured": true, 00:16:02.610 "data_offset": 2048, 00:16:02.610 "data_size": 63488 00:16:02.610 }, 00:16:02.610 { 00:16:02.610 "name": "BaseBdev4", 00:16:02.610 "uuid": "d1c05aeb-9b1d-4e8f-8376-862bd0cf941e", 00:16:02.610 "is_configured": true, 00:16:02.610 "data_offset": 2048, 00:16:02.610 "data_size": 63488 00:16:02.610 } 00:16:02.610 ] 00:16:02.610 }' 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.610 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:03.178 06:42:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3f5c3d9b-0aa5-4deb-9aad-9256fe6db490 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.178 [2024-12-06 06:42:21.739763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:03.178 [2024-12-06 06:42:21.740209] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:03.178 [2024-12-06 06:42:21.740229] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:03.178 NewBaseBdev 00:16:03.178 [2024-12-06 06:42:21.740580] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:03.178 [2024-12-06 06:42:21.740800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:03.178 [2024-12-06 06:42:21.740825] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:03.178 [2024-12-06 06:42:21.741021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.178 06:42:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.178 [ 00:16:03.178 { 00:16:03.178 "name": "NewBaseBdev", 00:16:03.178 "aliases": [ 00:16:03.178 "3f5c3d9b-0aa5-4deb-9aad-9256fe6db490" 00:16:03.178 ], 00:16:03.178 "product_name": "Malloc disk", 00:16:03.178 "block_size": 512, 00:16:03.178 "num_blocks": 65536, 00:16:03.178 "uuid": "3f5c3d9b-0aa5-4deb-9aad-9256fe6db490", 00:16:03.178 "assigned_rate_limits": { 00:16:03.178 "rw_ios_per_sec": 0, 00:16:03.178 "rw_mbytes_per_sec": 0, 00:16:03.178 "r_mbytes_per_sec": 0, 00:16:03.178 "w_mbytes_per_sec": 0 00:16:03.178 }, 00:16:03.178 "claimed": true, 00:16:03.178 "claim_type": "exclusive_write", 00:16:03.178 "zoned": false, 00:16:03.178 "supported_io_types": { 00:16:03.178 "read": true, 00:16:03.178 "write": true, 00:16:03.178 "unmap": true, 00:16:03.178 "flush": true, 00:16:03.178 "reset": true, 00:16:03.178 "nvme_admin": false, 00:16:03.178 "nvme_io": false, 00:16:03.178 "nvme_io_md": false, 00:16:03.178 "write_zeroes": true, 00:16:03.178 "zcopy": true, 00:16:03.178 "get_zone_info": false, 00:16:03.178 "zone_management": false, 00:16:03.178 "zone_append": false, 00:16:03.178 "compare": false, 00:16:03.178 "compare_and_write": false, 00:16:03.178 "abort": true, 00:16:03.178 "seek_hole": false, 00:16:03.178 "seek_data": false, 00:16:03.178 "copy": true, 00:16:03.178 "nvme_iov_md": false 00:16:03.178 }, 00:16:03.178 "memory_domains": [ 00:16:03.178 { 00:16:03.178 "dma_device_id": "system", 00:16:03.178 "dma_device_type": 1 00:16:03.178 }, 00:16:03.178 { 00:16:03.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.178 "dma_device_type": 2 00:16:03.178 } 00:16:03.178 ], 00:16:03.178 "driver_specific": {} 00:16:03.178 } 00:16:03.178 ] 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:03.178 06:42:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.178 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.436 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.436 "name": "Existed_Raid", 00:16:03.436 "uuid": "e1b13762-be36-4d79-ae04-47ff1cc65aca", 00:16:03.436 "strip_size_kb": 64, 00:16:03.436 
"state": "online", 00:16:03.436 "raid_level": "concat", 00:16:03.436 "superblock": true, 00:16:03.436 "num_base_bdevs": 4, 00:16:03.436 "num_base_bdevs_discovered": 4, 00:16:03.436 "num_base_bdevs_operational": 4, 00:16:03.436 "base_bdevs_list": [ 00:16:03.436 { 00:16:03.436 "name": "NewBaseBdev", 00:16:03.436 "uuid": "3f5c3d9b-0aa5-4deb-9aad-9256fe6db490", 00:16:03.436 "is_configured": true, 00:16:03.436 "data_offset": 2048, 00:16:03.436 "data_size": 63488 00:16:03.436 }, 00:16:03.436 { 00:16:03.436 "name": "BaseBdev2", 00:16:03.436 "uuid": "b328597a-e148-4e05-9c4c-de7f1ed95d77", 00:16:03.436 "is_configured": true, 00:16:03.436 "data_offset": 2048, 00:16:03.436 "data_size": 63488 00:16:03.436 }, 00:16:03.436 { 00:16:03.436 "name": "BaseBdev3", 00:16:03.436 "uuid": "23971170-dde1-4958-aa72-30de2595c6c7", 00:16:03.436 "is_configured": true, 00:16:03.436 "data_offset": 2048, 00:16:03.437 "data_size": 63488 00:16:03.437 }, 00:16:03.437 { 00:16:03.437 "name": "BaseBdev4", 00:16:03.437 "uuid": "d1c05aeb-9b1d-4e8f-8376-862bd0cf941e", 00:16:03.437 "is_configured": true, 00:16:03.437 "data_offset": 2048, 00:16:03.437 "data_size": 63488 00:16:03.437 } 00:16:03.437 ] 00:16:03.437 }' 00:16:03.437 06:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.437 06:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.695 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:03.695 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:03.695 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:03.695 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:03.695 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:03.695 
06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:03.695 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:03.695 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.695 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.695 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:03.695 [2024-12-06 06:42:22.328406] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.954 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.954 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:03.954 "name": "Existed_Raid", 00:16:03.954 "aliases": [ 00:16:03.954 "e1b13762-be36-4d79-ae04-47ff1cc65aca" 00:16:03.954 ], 00:16:03.954 "product_name": "Raid Volume", 00:16:03.954 "block_size": 512, 00:16:03.954 "num_blocks": 253952, 00:16:03.954 "uuid": "e1b13762-be36-4d79-ae04-47ff1cc65aca", 00:16:03.954 "assigned_rate_limits": { 00:16:03.954 "rw_ios_per_sec": 0, 00:16:03.954 "rw_mbytes_per_sec": 0, 00:16:03.954 "r_mbytes_per_sec": 0, 00:16:03.954 "w_mbytes_per_sec": 0 00:16:03.954 }, 00:16:03.954 "claimed": false, 00:16:03.954 "zoned": false, 00:16:03.954 "supported_io_types": { 00:16:03.954 "read": true, 00:16:03.954 "write": true, 00:16:03.954 "unmap": true, 00:16:03.954 "flush": true, 00:16:03.954 "reset": true, 00:16:03.954 "nvme_admin": false, 00:16:03.954 "nvme_io": false, 00:16:03.954 "nvme_io_md": false, 00:16:03.954 "write_zeroes": true, 00:16:03.954 "zcopy": false, 00:16:03.954 "get_zone_info": false, 00:16:03.954 "zone_management": false, 00:16:03.954 "zone_append": false, 00:16:03.954 "compare": false, 00:16:03.954 "compare_and_write": false, 00:16:03.954 "abort": 
false, 00:16:03.954 "seek_hole": false, 00:16:03.954 "seek_data": false, 00:16:03.954 "copy": false, 00:16:03.954 "nvme_iov_md": false 00:16:03.954 }, 00:16:03.954 "memory_domains": [ 00:16:03.954 { 00:16:03.954 "dma_device_id": "system", 00:16:03.954 "dma_device_type": 1 00:16:03.954 }, 00:16:03.954 { 00:16:03.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.954 "dma_device_type": 2 00:16:03.954 }, 00:16:03.954 { 00:16:03.954 "dma_device_id": "system", 00:16:03.954 "dma_device_type": 1 00:16:03.954 }, 00:16:03.954 { 00:16:03.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.954 "dma_device_type": 2 00:16:03.954 }, 00:16:03.954 { 00:16:03.954 "dma_device_id": "system", 00:16:03.954 "dma_device_type": 1 00:16:03.954 }, 00:16:03.954 { 00:16:03.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.954 "dma_device_type": 2 00:16:03.954 }, 00:16:03.954 { 00:16:03.954 "dma_device_id": "system", 00:16:03.954 "dma_device_type": 1 00:16:03.954 }, 00:16:03.954 { 00:16:03.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.954 "dma_device_type": 2 00:16:03.954 } 00:16:03.954 ], 00:16:03.954 "driver_specific": { 00:16:03.954 "raid": { 00:16:03.954 "uuid": "e1b13762-be36-4d79-ae04-47ff1cc65aca", 00:16:03.954 "strip_size_kb": 64, 00:16:03.954 "state": "online", 00:16:03.954 "raid_level": "concat", 00:16:03.954 "superblock": true, 00:16:03.954 "num_base_bdevs": 4, 00:16:03.954 "num_base_bdevs_discovered": 4, 00:16:03.954 "num_base_bdevs_operational": 4, 00:16:03.954 "base_bdevs_list": [ 00:16:03.954 { 00:16:03.954 "name": "NewBaseBdev", 00:16:03.954 "uuid": "3f5c3d9b-0aa5-4deb-9aad-9256fe6db490", 00:16:03.954 "is_configured": true, 00:16:03.954 "data_offset": 2048, 00:16:03.954 "data_size": 63488 00:16:03.954 }, 00:16:03.954 { 00:16:03.954 "name": "BaseBdev2", 00:16:03.954 "uuid": "b328597a-e148-4e05-9c4c-de7f1ed95d77", 00:16:03.954 "is_configured": true, 00:16:03.954 "data_offset": 2048, 00:16:03.954 "data_size": 63488 00:16:03.954 }, 00:16:03.954 { 00:16:03.954 
"name": "BaseBdev3", 00:16:03.954 "uuid": "23971170-dde1-4958-aa72-30de2595c6c7", 00:16:03.954 "is_configured": true, 00:16:03.954 "data_offset": 2048, 00:16:03.954 "data_size": 63488 00:16:03.954 }, 00:16:03.954 { 00:16:03.954 "name": "BaseBdev4", 00:16:03.954 "uuid": "d1c05aeb-9b1d-4e8f-8376-862bd0cf941e", 00:16:03.954 "is_configured": true, 00:16:03.955 "data_offset": 2048, 00:16:03.955 "data_size": 63488 00:16:03.955 } 00:16:03.955 ] 00:16:03.955 } 00:16:03.955 } 00:16:03.955 }' 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:03.955 BaseBdev2 00:16:03.955 BaseBdev3 00:16:03.955 BaseBdev4' 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.955 06:42:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.955 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.212 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:04.212 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.212 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.212 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.213 [2024-12-06 06:42:22.708228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:04.213 [2024-12-06 06:42:22.708277] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.213 [2024-12-06 06:42:22.708388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.213 [2024-12-06 06:42:22.708502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.213 [2024-12-06 06:42:22.708537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72256 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72256 ']' 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72256 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72256 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72256' 00:16:04.213 killing process with pid 72256 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72256 00:16:04.213 [2024-12-06 06:42:22.746403] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:04.213 06:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72256 00:16:04.471 [2024-12-06 06:42:23.106162] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:05.883 06:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:05.883 00:16:05.883 real 0m12.912s 00:16:05.883 user 0m21.406s 00:16:05.883 sys 0m1.710s 00:16:05.883 06:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.884 06:42:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.884 ************************************ 00:16:05.884 END TEST raid_state_function_test_sb 00:16:05.884 ************************************ 00:16:05.884 06:42:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:16:05.884 06:42:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:05.884 06:42:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.884 06:42:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:05.884 ************************************ 00:16:05.884 START TEST raid_superblock_test 00:16:05.884 ************************************ 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72943 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72943 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72943 ']' 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.884 06:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.885 06:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.885 06:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.885 06:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.885 [2024-12-06 06:42:24.330869] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:16:05.885 [2024-12-06 06:42:24.331489] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72943 ] 00:16:06.183 [2024-12-06 06:42:24.518168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.183 [2024-12-06 06:42:24.672898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.440 [2024-12-06 06:42:24.888418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.440 [2024-12-06 06:42:24.888492] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:07.007 
06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.007 malloc1 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.007 [2024-12-06 06:42:25.434500] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:07.007 [2024-12-06 06:42:25.435700] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.007 [2024-12-06 06:42:25.435744] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:07.007 [2024-12-06 06:42:25.435762] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.007 [2024-12-06 06:42:25.438555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.007 [2024-12-06 06:42:25.438605] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:07.007 pt1 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.007 malloc2 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.007 [2024-12-06 06:42:25.491259] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:07.007 [2024-12-06 06:42:25.491329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.007 [2024-12-06 06:42:25.491383] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:07.007 [2024-12-06 06:42:25.491399] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.007 [2024-12-06 06:42:25.494279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.007 [2024-12-06 06:42:25.494465] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:07.007 
pt2 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.007 malloc3 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.007 [2024-12-06 06:42:25.559335] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:07.007 [2024-12-06 06:42:25.559403] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.007 [2024-12-06 06:42:25.559436] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:07.007 [2024-12-06 06:42:25.559452] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.007 [2024-12-06 06:42:25.562229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.007 [2024-12-06 06:42:25.562397] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:07.007 pt3 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.007 malloc4 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.007 [2024-12-06 06:42:25.615175] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:07.007 [2024-12-06 06:42:25.615248] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.007 [2024-12-06 06:42:25.615280] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:07.007 [2024-12-06 06:42:25.615295] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.007 [2024-12-06 06:42:25.618030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.007 [2024-12-06 06:42:25.618075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:07.007 pt4 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.007 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.008 [2024-12-06 06:42:25.627227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:07.008 [2024-12-06 
06:42:25.629628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:07.008 [2024-12-06 06:42:25.629749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:07.008 [2024-12-06 06:42:25.629826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:07.008 [2024-12-06 06:42:25.630068] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:07.008 [2024-12-06 06:42:25.630086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:07.008 [2024-12-06 06:42:25.630401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:07.008 [2024-12-06 06:42:25.630636] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:07.008 [2024-12-06 06:42:25.630658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:07.008 [2024-12-06 06:42:25.630834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.008 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.008 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:07.008 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.008 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.008 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:07.008 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.008 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.008 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:07.008 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.008 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.008 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.008 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.008 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.008 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.008 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.265 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.265 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.265 "name": "raid_bdev1", 00:16:07.265 "uuid": "d583a639-25fe-4868-a3a9-61b905dfa6e6", 00:16:07.265 "strip_size_kb": 64, 00:16:07.265 "state": "online", 00:16:07.265 "raid_level": "concat", 00:16:07.265 "superblock": true, 00:16:07.265 "num_base_bdevs": 4, 00:16:07.265 "num_base_bdevs_discovered": 4, 00:16:07.265 "num_base_bdevs_operational": 4, 00:16:07.265 "base_bdevs_list": [ 00:16:07.265 { 00:16:07.265 "name": "pt1", 00:16:07.265 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:07.265 "is_configured": true, 00:16:07.265 "data_offset": 2048, 00:16:07.265 "data_size": 63488 00:16:07.265 }, 00:16:07.265 { 00:16:07.265 "name": "pt2", 00:16:07.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.265 "is_configured": true, 00:16:07.265 "data_offset": 2048, 00:16:07.265 "data_size": 63488 00:16:07.265 }, 00:16:07.265 { 00:16:07.265 "name": "pt3", 00:16:07.265 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:07.265 "is_configured": true, 00:16:07.265 "data_offset": 2048, 00:16:07.265 
"data_size": 63488 00:16:07.265 }, 00:16:07.265 { 00:16:07.265 "name": "pt4", 00:16:07.265 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:07.265 "is_configured": true, 00:16:07.265 "data_offset": 2048, 00:16:07.265 "data_size": 63488 00:16:07.265 } 00:16:07.265 ] 00:16:07.265 }' 00:16:07.265 06:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.265 06:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.523 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:07.523 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:07.523 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:07.523 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:07.523 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:07.523 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:07.523 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:07.523 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:07.523 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.523 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.523 [2024-12-06 06:42:26.107756] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:07.523 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.523 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:07.523 "name": "raid_bdev1", 00:16:07.523 "aliases": [ 00:16:07.523 "d583a639-25fe-4868-a3a9-61b905dfa6e6" 
00:16:07.523 ], 00:16:07.523 "product_name": "Raid Volume", 00:16:07.523 "block_size": 512, 00:16:07.523 "num_blocks": 253952, 00:16:07.523 "uuid": "d583a639-25fe-4868-a3a9-61b905dfa6e6", 00:16:07.523 "assigned_rate_limits": { 00:16:07.523 "rw_ios_per_sec": 0, 00:16:07.523 "rw_mbytes_per_sec": 0, 00:16:07.523 "r_mbytes_per_sec": 0, 00:16:07.523 "w_mbytes_per_sec": 0 00:16:07.523 }, 00:16:07.523 "claimed": false, 00:16:07.523 "zoned": false, 00:16:07.523 "supported_io_types": { 00:16:07.523 "read": true, 00:16:07.523 "write": true, 00:16:07.523 "unmap": true, 00:16:07.523 "flush": true, 00:16:07.523 "reset": true, 00:16:07.523 "nvme_admin": false, 00:16:07.523 "nvme_io": false, 00:16:07.523 "nvme_io_md": false, 00:16:07.523 "write_zeroes": true, 00:16:07.523 "zcopy": false, 00:16:07.523 "get_zone_info": false, 00:16:07.523 "zone_management": false, 00:16:07.523 "zone_append": false, 00:16:07.523 "compare": false, 00:16:07.523 "compare_and_write": false, 00:16:07.523 "abort": false, 00:16:07.523 "seek_hole": false, 00:16:07.523 "seek_data": false, 00:16:07.523 "copy": false, 00:16:07.523 "nvme_iov_md": false 00:16:07.523 }, 00:16:07.523 "memory_domains": [ 00:16:07.523 { 00:16:07.523 "dma_device_id": "system", 00:16:07.523 "dma_device_type": 1 00:16:07.523 }, 00:16:07.523 { 00:16:07.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.523 "dma_device_type": 2 00:16:07.523 }, 00:16:07.523 { 00:16:07.523 "dma_device_id": "system", 00:16:07.523 "dma_device_type": 1 00:16:07.523 }, 00:16:07.523 { 00:16:07.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.523 "dma_device_type": 2 00:16:07.523 }, 00:16:07.523 { 00:16:07.523 "dma_device_id": "system", 00:16:07.523 "dma_device_type": 1 00:16:07.523 }, 00:16:07.523 { 00:16:07.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.523 "dma_device_type": 2 00:16:07.523 }, 00:16:07.524 { 00:16:07.524 "dma_device_id": "system", 00:16:07.524 "dma_device_type": 1 00:16:07.524 }, 00:16:07.524 { 00:16:07.524 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:07.524 "dma_device_type": 2 00:16:07.524 } 00:16:07.524 ], 00:16:07.524 "driver_specific": { 00:16:07.524 "raid": { 00:16:07.524 "uuid": "d583a639-25fe-4868-a3a9-61b905dfa6e6", 00:16:07.524 "strip_size_kb": 64, 00:16:07.524 "state": "online", 00:16:07.524 "raid_level": "concat", 00:16:07.524 "superblock": true, 00:16:07.524 "num_base_bdevs": 4, 00:16:07.524 "num_base_bdevs_discovered": 4, 00:16:07.524 "num_base_bdevs_operational": 4, 00:16:07.524 "base_bdevs_list": [ 00:16:07.524 { 00:16:07.524 "name": "pt1", 00:16:07.524 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:07.524 "is_configured": true, 00:16:07.524 "data_offset": 2048, 00:16:07.524 "data_size": 63488 00:16:07.524 }, 00:16:07.524 { 00:16:07.524 "name": "pt2", 00:16:07.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.524 "is_configured": true, 00:16:07.524 "data_offset": 2048, 00:16:07.524 "data_size": 63488 00:16:07.524 }, 00:16:07.524 { 00:16:07.524 "name": "pt3", 00:16:07.524 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:07.524 "is_configured": true, 00:16:07.524 "data_offset": 2048, 00:16:07.524 "data_size": 63488 00:16:07.524 }, 00:16:07.524 { 00:16:07.524 "name": "pt4", 00:16:07.524 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:07.524 "is_configured": true, 00:16:07.524 "data_offset": 2048, 00:16:07.524 "data_size": 63488 00:16:07.524 } 00:16:07.524 ] 00:16:07.524 } 00:16:07.524 } 00:16:07.524 }' 00:16:07.524 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:07.781 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:07.781 pt2 00:16:07.781 pt3 00:16:07.781 pt4' 00:16:07.781 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.782 06:42:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.782 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.040 [2024-12-06 06:42:26.463818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d583a639-25fe-4868-a3a9-61b905dfa6e6 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d583a639-25fe-4868-a3a9-61b905dfa6e6 ']' 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.040 [2024-12-06 06:42:26.515432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:08.040 [2024-12-06 06:42:26.515466] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:08.040 [2024-12-06 06:42:26.515592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.040 [2024-12-06 06:42:26.515714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.040 [2024-12-06 06:42:26.515753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:08.040 06:42:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.040 [2024-12-06 06:42:26.667539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:08.040 [2024-12-06 06:42:26.670176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:08.040 [2024-12-06 06:42:26.670243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:08.040 [2024-12-06 06:42:26.670313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:08.040 [2024-12-06 06:42:26.670387] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:08.040 [2024-12-06 06:42:26.670479] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:08.040 [2024-12-06 06:42:26.670512] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:08.040 [2024-12-06 06:42:26.670555] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:08.040 [2024-12-06 06:42:26.670580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:08.040 [2024-12-06 06:42:26.670596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:16:08.040 request: 00:16:08.040 { 00:16:08.040 "name": "raid_bdev1", 00:16:08.040 "raid_level": "concat", 00:16:08.040 "base_bdevs": [ 00:16:08.040 "malloc1", 00:16:08.040 "malloc2", 00:16:08.040 "malloc3", 00:16:08.040 "malloc4" 00:16:08.040 ], 00:16:08.040 "strip_size_kb": 64, 00:16:08.040 "superblock": false, 00:16:08.040 "method": "bdev_raid_create", 00:16:08.040 "req_id": 1 00:16:08.040 } 00:16:08.040 Got JSON-RPC error response 00:16:08.040 response: 00:16:08.040 { 00:16:08.040 "code": -17, 00:16:08.040 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:08.040 } 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.040 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.300 [2024-12-06 06:42:26.723474] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:08.300 [2024-12-06 06:42:26.723693] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.300 [2024-12-06 06:42:26.723851] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:08.300 [2024-12-06 06:42:26.724003] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.300 [2024-12-06 06:42:26.726934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.300 [2024-12-06 06:42:26.727133] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:08.300 [2024-12-06 06:42:26.727392] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:08.300 [2024-12-06 06:42:26.727597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:08.300 pt1 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.300 "name": "raid_bdev1", 00:16:08.300 "uuid": "d583a639-25fe-4868-a3a9-61b905dfa6e6", 00:16:08.300 "strip_size_kb": 64, 00:16:08.300 "state": "configuring", 00:16:08.300 "raid_level": "concat", 00:16:08.300 "superblock": true, 00:16:08.300 "num_base_bdevs": 4, 00:16:08.300 "num_base_bdevs_discovered": 1, 00:16:08.300 "num_base_bdevs_operational": 4, 00:16:08.300 "base_bdevs_list": [ 00:16:08.300 { 00:16:08.300 "name": "pt1", 00:16:08.300 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:08.300 "is_configured": true, 00:16:08.300 "data_offset": 2048, 00:16:08.300 "data_size": 63488 00:16:08.300 }, 00:16:08.300 { 00:16:08.300 "name": null, 00:16:08.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:08.300 "is_configured": false, 00:16:08.300 "data_offset": 2048, 00:16:08.300 "data_size": 63488 00:16:08.300 }, 00:16:08.300 { 00:16:08.300 "name": null, 00:16:08.300 
"uuid": "00000000-0000-0000-0000-000000000003", 00:16:08.300 "is_configured": false, 00:16:08.300 "data_offset": 2048, 00:16:08.300 "data_size": 63488 00:16:08.300 }, 00:16:08.300 { 00:16:08.300 "name": null, 00:16:08.300 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:08.300 "is_configured": false, 00:16:08.300 "data_offset": 2048, 00:16:08.300 "data_size": 63488 00:16:08.300 } 00:16:08.300 ] 00:16:08.300 }' 00:16:08.300 06:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.301 06:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.868 [2024-12-06 06:42:27.224131] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:08.868 [2024-12-06 06:42:27.224222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.868 [2024-12-06 06:42:27.224252] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:08.868 [2024-12-06 06:42:27.224269] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.868 [2024-12-06 06:42:27.224848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.868 [2024-12-06 06:42:27.224887] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:08.868 [2024-12-06 06:42:27.225018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:08.868 [2024-12-06 06:42:27.225076] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:08.868 pt2 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.868 [2024-12-06 06:42:27.232114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.868 06:42:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.868 "name": "raid_bdev1", 00:16:08.868 "uuid": "d583a639-25fe-4868-a3a9-61b905dfa6e6", 00:16:08.868 "strip_size_kb": 64, 00:16:08.868 "state": "configuring", 00:16:08.868 "raid_level": "concat", 00:16:08.868 "superblock": true, 00:16:08.868 "num_base_bdevs": 4, 00:16:08.868 "num_base_bdevs_discovered": 1, 00:16:08.868 "num_base_bdevs_operational": 4, 00:16:08.868 "base_bdevs_list": [ 00:16:08.868 { 00:16:08.868 "name": "pt1", 00:16:08.868 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:08.868 "is_configured": true, 00:16:08.868 "data_offset": 2048, 00:16:08.868 "data_size": 63488 00:16:08.868 }, 00:16:08.868 { 00:16:08.868 "name": null, 00:16:08.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:08.868 "is_configured": false, 00:16:08.868 "data_offset": 0, 00:16:08.868 "data_size": 63488 00:16:08.868 }, 00:16:08.868 { 00:16:08.868 "name": null, 00:16:08.868 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:08.868 "is_configured": false, 00:16:08.868 "data_offset": 2048, 00:16:08.868 "data_size": 63488 00:16:08.868 }, 00:16:08.868 { 00:16:08.868 "name": null, 00:16:08.868 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:08.868 "is_configured": false, 00:16:08.868 "data_offset": 2048, 00:16:08.868 "data_size": 63488 00:16:08.868 } 00:16:08.868 ] 00:16:08.868 }' 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.868 06:42:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.127 [2024-12-06 06:42:27.716269] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:09.127 [2024-12-06 06:42:27.716350] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.127 [2024-12-06 06:42:27.716383] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:09.127 [2024-12-06 06:42:27.716397] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.127 [2024-12-06 06:42:27.716978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.127 [2024-12-06 06:42:27.717004] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:09.127 [2024-12-06 06:42:27.717107] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:09.127 [2024-12-06 06:42:27.717139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:09.127 pt2 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.127 [2024-12-06 06:42:27.724222] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:09.127 [2024-12-06 06:42:27.724278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.127 [2024-12-06 06:42:27.724305] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:09.127 [2024-12-06 06:42:27.724318] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.127 [2024-12-06 06:42:27.724785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.127 [2024-12-06 06:42:27.724816] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:09.127 [2024-12-06 06:42:27.724894] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:09.127 [2024-12-06 06:42:27.724928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:09.127 pt3 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.127 [2024-12-06 06:42:27.732200] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:09.127 [2024-12-06 06:42:27.732266] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.127 [2024-12-06 06:42:27.732292] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:09.127 [2024-12-06 06:42:27.732306] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.127 [2024-12-06 06:42:27.732797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.127 [2024-12-06 06:42:27.732891] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:09.127 [2024-12-06 06:42:27.732997] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:09.127 [2024-12-06 06:42:27.733046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:09.127 [2024-12-06 06:42:27.733235] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:09.127 [2024-12-06 06:42:27.733251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:09.127 [2024-12-06 06:42:27.733619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:09.127 [2024-12-06 06:42:27.733810] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:09.127 [2024-12-06 06:42:27.733834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:09.127 [2024-12-06 06:42:27.734030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.127 pt4 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.127 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.386 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.386 "name": "raid_bdev1", 00:16:09.386 "uuid": "d583a639-25fe-4868-a3a9-61b905dfa6e6", 00:16:09.386 "strip_size_kb": 64, 00:16:09.386 "state": "online", 00:16:09.386 "raid_level": "concat", 00:16:09.386 
"superblock": true, 00:16:09.386 "num_base_bdevs": 4, 00:16:09.386 "num_base_bdevs_discovered": 4, 00:16:09.386 "num_base_bdevs_operational": 4, 00:16:09.386 "base_bdevs_list": [ 00:16:09.386 { 00:16:09.386 "name": "pt1", 00:16:09.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:09.386 "is_configured": true, 00:16:09.386 "data_offset": 2048, 00:16:09.386 "data_size": 63488 00:16:09.386 }, 00:16:09.386 { 00:16:09.386 "name": "pt2", 00:16:09.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:09.386 "is_configured": true, 00:16:09.386 "data_offset": 2048, 00:16:09.386 "data_size": 63488 00:16:09.386 }, 00:16:09.386 { 00:16:09.386 "name": "pt3", 00:16:09.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:09.386 "is_configured": true, 00:16:09.386 "data_offset": 2048, 00:16:09.386 "data_size": 63488 00:16:09.386 }, 00:16:09.386 { 00:16:09.386 "name": "pt4", 00:16:09.386 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:09.386 "is_configured": true, 00:16:09.386 "data_offset": 2048, 00:16:09.386 "data_size": 63488 00:16:09.386 } 00:16:09.386 ] 00:16:09.386 }' 00:16:09.386 06:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.386 06:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.681 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:09.681 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:09.681 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:09.681 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:09.681 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:09.681 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:09.681 06:42:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:09.681 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:09.681 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.681 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.681 [2024-12-06 06:42:28.228806] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.681 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.681 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:09.681 "name": "raid_bdev1", 00:16:09.681 "aliases": [ 00:16:09.681 "d583a639-25fe-4868-a3a9-61b905dfa6e6" 00:16:09.681 ], 00:16:09.682 "product_name": "Raid Volume", 00:16:09.682 "block_size": 512, 00:16:09.682 "num_blocks": 253952, 00:16:09.682 "uuid": "d583a639-25fe-4868-a3a9-61b905dfa6e6", 00:16:09.682 "assigned_rate_limits": { 00:16:09.682 "rw_ios_per_sec": 0, 00:16:09.682 "rw_mbytes_per_sec": 0, 00:16:09.682 "r_mbytes_per_sec": 0, 00:16:09.682 "w_mbytes_per_sec": 0 00:16:09.682 }, 00:16:09.682 "claimed": false, 00:16:09.682 "zoned": false, 00:16:09.682 "supported_io_types": { 00:16:09.682 "read": true, 00:16:09.682 "write": true, 00:16:09.682 "unmap": true, 00:16:09.682 "flush": true, 00:16:09.682 "reset": true, 00:16:09.682 "nvme_admin": false, 00:16:09.682 "nvme_io": false, 00:16:09.682 "nvme_io_md": false, 00:16:09.682 "write_zeroes": true, 00:16:09.682 "zcopy": false, 00:16:09.682 "get_zone_info": false, 00:16:09.682 "zone_management": false, 00:16:09.682 "zone_append": false, 00:16:09.682 "compare": false, 00:16:09.682 "compare_and_write": false, 00:16:09.682 "abort": false, 00:16:09.682 "seek_hole": false, 00:16:09.682 "seek_data": false, 00:16:09.682 "copy": false, 00:16:09.682 "nvme_iov_md": false 00:16:09.682 }, 00:16:09.682 
"memory_domains": [ 00:16:09.682 { 00:16:09.682 "dma_device_id": "system", 00:16:09.682 "dma_device_type": 1 00:16:09.682 }, 00:16:09.682 { 00:16:09.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.682 "dma_device_type": 2 00:16:09.682 }, 00:16:09.682 { 00:16:09.682 "dma_device_id": "system", 00:16:09.682 "dma_device_type": 1 00:16:09.682 }, 00:16:09.682 { 00:16:09.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.682 "dma_device_type": 2 00:16:09.682 }, 00:16:09.682 { 00:16:09.682 "dma_device_id": "system", 00:16:09.682 "dma_device_type": 1 00:16:09.682 }, 00:16:09.682 { 00:16:09.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.682 "dma_device_type": 2 00:16:09.682 }, 00:16:09.682 { 00:16:09.682 "dma_device_id": "system", 00:16:09.682 "dma_device_type": 1 00:16:09.682 }, 00:16:09.682 { 00:16:09.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.682 "dma_device_type": 2 00:16:09.682 } 00:16:09.682 ], 00:16:09.682 "driver_specific": { 00:16:09.682 "raid": { 00:16:09.682 "uuid": "d583a639-25fe-4868-a3a9-61b905dfa6e6", 00:16:09.682 "strip_size_kb": 64, 00:16:09.682 "state": "online", 00:16:09.682 "raid_level": "concat", 00:16:09.682 "superblock": true, 00:16:09.682 "num_base_bdevs": 4, 00:16:09.682 "num_base_bdevs_discovered": 4, 00:16:09.682 "num_base_bdevs_operational": 4, 00:16:09.682 "base_bdevs_list": [ 00:16:09.682 { 00:16:09.682 "name": "pt1", 00:16:09.682 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:09.682 "is_configured": true, 00:16:09.682 "data_offset": 2048, 00:16:09.682 "data_size": 63488 00:16:09.682 }, 00:16:09.682 { 00:16:09.682 "name": "pt2", 00:16:09.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:09.682 "is_configured": true, 00:16:09.682 "data_offset": 2048, 00:16:09.682 "data_size": 63488 00:16:09.682 }, 00:16:09.682 { 00:16:09.682 "name": "pt3", 00:16:09.682 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:09.682 "is_configured": true, 00:16:09.682 "data_offset": 2048, 00:16:09.682 "data_size": 63488 
00:16:09.682 }, 00:16:09.682 { 00:16:09.682 "name": "pt4", 00:16:09.682 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:09.682 "is_configured": true, 00:16:09.682 "data_offset": 2048, 00:16:09.682 "data_size": 63488 00:16:09.682 } 00:16:09.682 ] 00:16:09.682 } 00:16:09.682 } 00:16:09.682 }' 00:16:09.682 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:09.954 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:09.954 pt2 00:16:09.954 pt3 00:16:09.954 pt4' 00:16:09.954 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.954 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:09.954 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.954 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:09.954 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.955 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:10.213 [2024-12-06 06:42:28.612875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d583a639-25fe-4868-a3a9-61b905dfa6e6 '!=' d583a639-25fe-4868-a3a9-61b905dfa6e6 ']' 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72943 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72943 ']' 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72943 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72943 00:16:10.213 killing process with pid 72943 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72943' 00:16:10.213 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72943 00:16:10.214 [2024-12-06 06:42:28.706544] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:10.214 06:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72943 00:16:10.214 [2024-12-06 06:42:28.706651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:10.214 [2024-12-06 06:42:28.706771] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:10.214 [2024-12-06 06:42:28.706795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:10.472 [2024-12-06 06:42:29.066850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:11.847 06:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:11.847 00:16:11.847 real 0m5.907s 00:16:11.847 user 0m8.857s 00:16:11.847 sys 0m0.861s 00:16:11.847 06:42:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.847 ************************************ 00:16:11.847 END TEST raid_superblock_test 00:16:11.847 ************************************ 00:16:11.847 06:42:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.847 06:42:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:16:11.847 06:42:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:11.847 06:42:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.847 06:42:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:11.847 ************************************ 00:16:11.847 START TEST raid_read_error_test 00:16:11.847 ************************************ 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pndqr9SArx 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73209 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73209 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 73209 ']' 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:11.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:11.847 06:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.847 [2024-12-06 06:42:30.307862] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:16:11.847 [2024-12-06 06:42:30.308037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73209 ] 00:16:12.106 [2024-12-06 06:42:30.497668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.106 [2024-12-06 06:42:30.663695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.363 [2024-12-06 06:42:30.878161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:12.363 [2024-12-06 06:42:30.878239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.931 BaseBdev1_malloc 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.931 true 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.931 [2024-12-06 06:42:31.348921] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:12.931 [2024-12-06 06:42:31.348992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.931 [2024-12-06 06:42:31.349022] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:12.931 [2024-12-06 06:42:31.349040] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.931 [2024-12-06 06:42:31.351867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.931 [2024-12-06 06:42:31.351920] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:12.931 BaseBdev1 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.931 BaseBdev2_malloc 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.931 true 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.931 [2024-12-06 06:42:31.409189] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:12.931 [2024-12-06 06:42:31.409263] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.931 [2024-12-06 06:42:31.409291] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:12.931 [2024-12-06 06:42:31.409308] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.931 [2024-12-06 06:42:31.412159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.931 [2024-12-06 06:42:31.412211] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:12.931 BaseBdev2 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.931 BaseBdev3_malloc 00:16:12.931 06:42:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.931 true 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.931 [2024-12-06 06:42:31.478194] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:12.931 [2024-12-06 06:42:31.478267] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.931 [2024-12-06 06:42:31.478295] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:12.931 [2024-12-06 06:42:31.478313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.931 [2024-12-06 06:42:31.481234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.931 [2024-12-06 06:42:31.481430] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:12.931 BaseBdev3 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.931 BaseBdev4_malloc 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.931 true 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.931 [2024-12-06 06:42:31.534781] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:12.931 [2024-12-06 06:42:31.534854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.931 [2024-12-06 06:42:31.534885] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:12.931 [2024-12-06 06:42:31.534904] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.931 [2024-12-06 06:42:31.537835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.931 [2024-12-06 06:42:31.537891] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:12.931 BaseBdev4 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.931 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.931 [2024-12-06 06:42:31.542864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:12.932 [2024-12-06 06:42:31.545288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:12.932 [2024-12-06 06:42:31.545567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:12.932 [2024-12-06 06:42:31.545704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:12.932 [2024-12-06 06:42:31.546039] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:12.932 [2024-12-06 06:42:31.546066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:12.932 [2024-12-06 06:42:31.546388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:12.932 [2024-12-06 06:42:31.546634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:12.932 [2024-12-06 06:42:31.546654] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:12.932 [2024-12-06 06:42:31.546898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.932 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.932 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:12.932 06:42:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.932 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.932 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:12.932 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.932 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.932 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.932 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.932 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.932 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.932 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.932 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.932 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.932 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.932 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.190 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.190 "name": "raid_bdev1", 00:16:13.190 "uuid": "8e343591-8211-43f6-a97d-a1978d9d153e", 00:16:13.190 "strip_size_kb": 64, 00:16:13.190 "state": "online", 00:16:13.190 "raid_level": "concat", 00:16:13.190 "superblock": true, 00:16:13.190 "num_base_bdevs": 4, 00:16:13.190 "num_base_bdevs_discovered": 4, 00:16:13.190 "num_base_bdevs_operational": 4, 00:16:13.190 "base_bdevs_list": [ 
00:16:13.190 { 00:16:13.190 "name": "BaseBdev1", 00:16:13.190 "uuid": "0989e99e-44f5-560d-aa42-22fc54845a0e", 00:16:13.190 "is_configured": true, 00:16:13.190 "data_offset": 2048, 00:16:13.190 "data_size": 63488 00:16:13.190 }, 00:16:13.190 { 00:16:13.190 "name": "BaseBdev2", 00:16:13.190 "uuid": "772b8f1e-d7b6-5818-95a6-76988f19fa88", 00:16:13.190 "is_configured": true, 00:16:13.190 "data_offset": 2048, 00:16:13.190 "data_size": 63488 00:16:13.190 }, 00:16:13.190 { 00:16:13.190 "name": "BaseBdev3", 00:16:13.190 "uuid": "1eaaf251-5870-513e-8128-16a5b6fa2d1d", 00:16:13.190 "is_configured": true, 00:16:13.190 "data_offset": 2048, 00:16:13.190 "data_size": 63488 00:16:13.190 }, 00:16:13.190 { 00:16:13.190 "name": "BaseBdev4", 00:16:13.190 "uuid": "2cf9415f-1755-516e-81fe-a0cd52f3d540", 00:16:13.190 "is_configured": true, 00:16:13.190 "data_offset": 2048, 00:16:13.190 "data_size": 63488 00:16:13.190 } 00:16:13.190 ] 00:16:13.190 }' 00:16:13.190 06:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.190 06:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.448 06:42:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:13.448 06:42:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:13.707 [2024-12-06 06:42:32.224458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.642 06:42:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.642 06:42:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.642 "name": "raid_bdev1", 00:16:14.642 "uuid": "8e343591-8211-43f6-a97d-a1978d9d153e", 00:16:14.642 "strip_size_kb": 64, 00:16:14.642 "state": "online", 00:16:14.642 "raid_level": "concat", 00:16:14.642 "superblock": true, 00:16:14.642 "num_base_bdevs": 4, 00:16:14.642 "num_base_bdevs_discovered": 4, 00:16:14.642 "num_base_bdevs_operational": 4, 00:16:14.642 "base_bdevs_list": [ 00:16:14.642 { 00:16:14.642 "name": "BaseBdev1", 00:16:14.642 "uuid": "0989e99e-44f5-560d-aa42-22fc54845a0e", 00:16:14.642 "is_configured": true, 00:16:14.642 "data_offset": 2048, 00:16:14.642 "data_size": 63488 00:16:14.642 }, 00:16:14.642 { 00:16:14.642 "name": "BaseBdev2", 00:16:14.642 "uuid": "772b8f1e-d7b6-5818-95a6-76988f19fa88", 00:16:14.642 "is_configured": true, 00:16:14.642 "data_offset": 2048, 00:16:14.642 "data_size": 63488 00:16:14.642 }, 00:16:14.642 { 00:16:14.642 "name": "BaseBdev3", 00:16:14.642 "uuid": "1eaaf251-5870-513e-8128-16a5b6fa2d1d", 00:16:14.642 "is_configured": true, 00:16:14.642 "data_offset": 2048, 00:16:14.642 "data_size": 63488 00:16:14.642 }, 00:16:14.642 { 00:16:14.642 "name": "BaseBdev4", 00:16:14.642 "uuid": "2cf9415f-1755-516e-81fe-a0cd52f3d540", 00:16:14.642 "is_configured": true, 00:16:14.642 "data_offset": 2048, 00:16:14.642 "data_size": 63488 00:16:14.642 } 00:16:14.642 ] 00:16:14.642 }' 00:16:14.642 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.643 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.207 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:15.207 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.207 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.207 [2024-12-06 06:42:33.640483] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:15.207 [2024-12-06 06:42:33.640541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:15.207 [2024-12-06 06:42:33.643991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.207 [2024-12-06 06:42:33.644071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.207 [2024-12-06 06:42:33.644133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.207 [2024-12-06 06:42:33.644152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:15.207 { 00:16:15.207 "results": [ 00:16:15.207 { 00:16:15.207 "job": "raid_bdev1", 00:16:15.207 "core_mask": "0x1", 00:16:15.207 "workload": "randrw", 00:16:15.207 "percentage": 50, 00:16:15.207 "status": "finished", 00:16:15.207 "queue_depth": 1, 00:16:15.207 "io_size": 131072, 00:16:15.207 "runtime": 1.413705, 00:16:15.207 "iops": 10119.508666942538, 00:16:15.207 "mibps": 1264.9385833678173, 00:16:15.207 "io_failed": 1, 00:16:15.207 "io_timeout": 0, 00:16:15.207 "avg_latency_us": 137.49612510087243, 00:16:15.207 "min_latency_us": 44.21818181818182, 00:16:15.207 "max_latency_us": 1861.8181818181818 00:16:15.207 } 00:16:15.207 ], 00:16:15.207 "core_count": 1 00:16:15.207 } 00:16:15.207 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.207 06:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73209 00:16:15.207 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73209 ']' 00:16:15.207 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73209 00:16:15.207 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:16:15.207 06:42:33 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:15.207 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73209 00:16:15.207 killing process with pid 73209 00:16:15.207 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:15.207 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:15.207 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73209' 00:16:15.207 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73209 00:16:15.207 06:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73209 00:16:15.207 [2024-12-06 06:42:33.681304] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:15.464 [2024-12-06 06:42:33.979720] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:16.837 06:42:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pndqr9SArx 00:16:16.837 06:42:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:16.837 06:42:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:16.837 ************************************ 00:16:16.837 END TEST raid_read_error_test 00:16:16.837 ************************************ 00:16:16.837 06:42:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:16:16.837 06:42:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:16.837 06:42:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:16.837 06:42:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:16.837 06:42:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:16:16.837 00:16:16.837 real 0m4.911s 
00:16:16.837 user 0m6.097s 00:16:16.837 sys 0m0.591s 00:16:16.837 06:42:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.837 06:42:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.837 06:42:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:16:16.837 06:42:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:16.837 06:42:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.837 06:42:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:16.837 ************************************ 00:16:16.837 START TEST raid_write_error_test 00:16:16.837 ************************************ 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:16.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DQ69ozNjKT 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73355 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73355 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73355 ']' 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.837 06:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.837 [2024-12-06 06:42:35.262679] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:16:16.837 [2024-12-06 06:42:35.262872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73355 ] 00:16:16.837 [2024-12-06 06:42:35.454412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.095 [2024-12-06 06:42:35.614026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.352 [2024-12-06 06:42:35.822064] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.352 [2024-12-06 06:42:35.822275] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 BaseBdev1_malloc 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 true 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 [2024-12-06 06:42:36.343639] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:17.919 [2024-12-06 06:42:36.343710] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.919 [2024-12-06 06:42:36.343741] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:17.919 [2024-12-06 06:42:36.343761] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.919 [2024-12-06 06:42:36.346539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.919 [2024-12-06 06:42:36.346592] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:17.919 BaseBdev1 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 BaseBdev2_malloc 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:17.919 06:42:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 true 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 [2024-12-06 06:42:36.399818] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:17.919 [2024-12-06 06:42:36.399922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.919 [2024-12-06 06:42:36.399968] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:17.919 [2024-12-06 06:42:36.400002] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.919 [2024-12-06 06:42:36.403745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.919 [2024-12-06 06:42:36.403800] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:17.919 BaseBdev2 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:16:17.919 BaseBdev3_malloc 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 true 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 [2024-12-06 06:42:36.475401] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:17.919 [2024-12-06 06:42:36.475690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.919 [2024-12-06 06:42:36.475745] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:17.919 [2024-12-06 06:42:36.475776] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.919 [2024-12-06 06:42:36.479497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.919 [2024-12-06 06:42:36.479586] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:17.919 BaseBdev3 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 BaseBdev4_malloc 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 true 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 [2024-12-06 06:42:36.551992] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:17.919 [2024-12-06 06:42:36.552068] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.919 [2024-12-06 06:42:36.552098] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:17.919 [2024-12-06 06:42:36.552117] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.919 [2024-12-06 06:42:36.554956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.919 [2024-12-06 06:42:36.555143] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:17.919 BaseBdev4 
00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.919 [2024-12-06 06:42:36.560155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.207 [2024-12-06 06:42:36.562628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:18.207 [2024-12-06 06:42:36.562747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:18.207 [2024-12-06 06:42:36.562848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:18.207 [2024-12-06 06:42:36.563159] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:18.207 [2024-12-06 06:42:36.563195] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:18.207 [2024-12-06 06:42:36.563512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:18.207 [2024-12-06 06:42:36.563757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:18.207 [2024-12-06 06:42:36.563783] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:18.207 [2024-12-06 06:42:36.563986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.207 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.207 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:16:18.207 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.207 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.207 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:18.207 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.207 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.207 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.207 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.207 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.207 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.207 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.207 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.207 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.207 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.207 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.207 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.207 "name": "raid_bdev1", 00:16:18.207 "uuid": "f909e2d1-678b-40ca-9651-8efa4f51b0bb", 00:16:18.207 "strip_size_kb": 64, 00:16:18.207 "state": "online", 00:16:18.207 "raid_level": "concat", 00:16:18.207 "superblock": true, 00:16:18.207 "num_base_bdevs": 4, 00:16:18.207 "num_base_bdevs_discovered": 4, 00:16:18.207 
"num_base_bdevs_operational": 4, 00:16:18.207 "base_bdevs_list": [ 00:16:18.207 { 00:16:18.207 "name": "BaseBdev1", 00:16:18.207 "uuid": "f7337df8-8938-56f1-8336-d10ecfecdc60", 00:16:18.207 "is_configured": true, 00:16:18.207 "data_offset": 2048, 00:16:18.207 "data_size": 63488 00:16:18.208 }, 00:16:18.208 { 00:16:18.208 "name": "BaseBdev2", 00:16:18.208 "uuid": "57b35940-150a-5f10-bfd7-e72f60b731a4", 00:16:18.208 "is_configured": true, 00:16:18.208 "data_offset": 2048, 00:16:18.208 "data_size": 63488 00:16:18.208 }, 00:16:18.208 { 00:16:18.208 "name": "BaseBdev3", 00:16:18.208 "uuid": "13567790-f45e-5728-9d98-b9aa4903c937", 00:16:18.208 "is_configured": true, 00:16:18.208 "data_offset": 2048, 00:16:18.208 "data_size": 63488 00:16:18.208 }, 00:16:18.208 { 00:16:18.208 "name": "BaseBdev4", 00:16:18.208 "uuid": "fe833a4c-2297-598c-94e3-c2e02d4be970", 00:16:18.208 "is_configured": true, 00:16:18.208 "data_offset": 2048, 00:16:18.208 "data_size": 63488 00:16:18.208 } 00:16:18.208 ] 00:16:18.208 }' 00:16:18.208 06:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.208 06:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.471 06:42:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:18.471 06:42:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:18.732 [2024-12-06 06:42:37.165761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:19.666 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:19.666 06:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.666 06:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.666 06:42:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.666 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:19.666 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:19.666 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:19.666 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:19.666 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.667 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.667 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:19.667 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.667 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.667 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.667 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.667 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.667 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.667 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.667 06:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.667 06:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.667 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.667 06:42:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.667 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.667 "name": "raid_bdev1", 00:16:19.667 "uuid": "f909e2d1-678b-40ca-9651-8efa4f51b0bb", 00:16:19.667 "strip_size_kb": 64, 00:16:19.667 "state": "online", 00:16:19.667 "raid_level": "concat", 00:16:19.667 "superblock": true, 00:16:19.667 "num_base_bdevs": 4, 00:16:19.667 "num_base_bdevs_discovered": 4, 00:16:19.667 "num_base_bdevs_operational": 4, 00:16:19.667 "base_bdevs_list": [ 00:16:19.667 { 00:16:19.667 "name": "BaseBdev1", 00:16:19.667 "uuid": "f7337df8-8938-56f1-8336-d10ecfecdc60", 00:16:19.667 "is_configured": true, 00:16:19.667 "data_offset": 2048, 00:16:19.667 "data_size": 63488 00:16:19.667 }, 00:16:19.667 { 00:16:19.667 "name": "BaseBdev2", 00:16:19.667 "uuid": "57b35940-150a-5f10-bfd7-e72f60b731a4", 00:16:19.667 "is_configured": true, 00:16:19.667 "data_offset": 2048, 00:16:19.667 "data_size": 63488 00:16:19.667 }, 00:16:19.667 { 00:16:19.667 "name": "BaseBdev3", 00:16:19.667 "uuid": "13567790-f45e-5728-9d98-b9aa4903c937", 00:16:19.667 "is_configured": true, 00:16:19.667 "data_offset": 2048, 00:16:19.667 "data_size": 63488 00:16:19.667 }, 00:16:19.667 { 00:16:19.667 "name": "BaseBdev4", 00:16:19.667 "uuid": "fe833a4c-2297-598c-94e3-c2e02d4be970", 00:16:19.667 "is_configured": true, 00:16:19.667 "data_offset": 2048, 00:16:19.667 "data_size": 63488 00:16:19.667 } 00:16:19.667 ] 00:16:19.667 }' 00:16:19.667 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.667 06:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.926 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:19.926 06:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.926 06:42:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.926 [2024-12-06 06:42:38.522963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.926 [2024-12-06 06:42:38.523142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.926 [2024-12-06 06:42:38.526716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.926 [2024-12-06 06:42:38.526925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.926 [2024-12-06 06:42:38.527116] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.926 [2024-12-06 06:42:38.527284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:19.926 { 00:16:19.926 "results": [ 00:16:19.926 { 00:16:19.926 "job": "raid_bdev1", 00:16:19.926 "core_mask": "0x1", 00:16:19.926 "workload": "randrw", 00:16:19.926 "percentage": 50, 00:16:19.926 "status": "finished", 00:16:19.926 "queue_depth": 1, 00:16:19.926 "io_size": 131072, 00:16:19.926 "runtime": 1.354909, 00:16:19.926 "iops": 10030.193909701684, 00:16:19.926 "mibps": 1253.7742387127105, 00:16:19.926 "io_failed": 1, 00:16:19.926 "io_timeout": 0, 00:16:19.926 "avg_latency_us": 138.31380285081704, 00:16:19.926 "min_latency_us": 45.38181818181818, 00:16:19.926 "max_latency_us": 1832.0290909090909 00:16:19.926 } 00:16:19.926 ], 00:16:19.926 "core_count": 1 00:16:19.926 } 00:16:19.926 06:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.926 06:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73355 00:16:19.926 06:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73355 ']' 00:16:19.926 06:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73355 00:16:19.926 06:42:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:16:19.926 06:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.926 06:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73355 00:16:19.926 killing process with pid 73355 00:16:19.926 06:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:19.926 06:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:19.926 06:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73355' 00:16:19.926 06:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73355 00:16:19.926 06:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73355 00:16:19.926 [2024-12-06 06:42:38.558342] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:20.493 [2024-12-06 06:42:38.857133] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.426 06:42:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DQ69ozNjKT 00:16:21.426 06:42:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:21.426 06:42:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:21.426 06:42:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:16:21.426 06:42:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:21.426 06:42:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:21.426 06:42:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:21.426 06:42:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:16:21.426 00:16:21.426 real 0m4.833s 00:16:21.426 user 0m5.898s 
00:16:21.426 sys 0m0.612s 00:16:21.426 06:42:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.426 ************************************ 00:16:21.426 END TEST raid_write_error_test 00:16:21.426 ************************************ 00:16:21.426 06:42:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.426 06:42:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:21.426 06:42:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:16:21.426 06:42:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:21.426 06:42:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.426 06:42:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.426 ************************************ 00:16:21.426 START TEST raid_state_function_test 00:16:21.426 ************************************ 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:21.426 
06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.426 Process raid pid: 73504 00:16:21.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73504 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73504' 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73504 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73504 ']' 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.426 
06:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.426 06:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.684 [2024-12-06 06:42:40.132720] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:16:21.684 [2024-12-06 06:42:40.132898] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.684 [2024-12-06 06:42:40.324844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.940 [2024-12-06 06:42:40.461633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.197 [2024-12-06 06:42:40.671346] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.197 [2024-12-06 06:42:40.671647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.762 [2024-12-06 06:42:41.216304] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:22.762 
[2024-12-06 06:42:41.216384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:22.762 [2024-12-06 06:42:41.216401] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:22.762 [2024-12-06 06:42:41.216418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:22.762 [2024-12-06 06:42:41.216429] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:22.762 [2024-12-06 06:42:41.216443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:22.762 [2024-12-06 06:42:41.216453] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:22.762 [2024-12-06 06:42:41.216467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.762 06:42:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.762 "name": "Existed_Raid", 00:16:22.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.762 "strip_size_kb": 0, 00:16:22.762 "state": "configuring", 00:16:22.762 "raid_level": "raid1", 00:16:22.762 "superblock": false, 00:16:22.762 "num_base_bdevs": 4, 00:16:22.762 "num_base_bdevs_discovered": 0, 00:16:22.762 "num_base_bdevs_operational": 4, 00:16:22.762 "base_bdevs_list": [ 00:16:22.762 { 00:16:22.762 "name": "BaseBdev1", 00:16:22.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.762 "is_configured": false, 00:16:22.762 "data_offset": 0, 00:16:22.762 "data_size": 0 00:16:22.762 }, 00:16:22.762 { 00:16:22.762 "name": "BaseBdev2", 00:16:22.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.762 "is_configured": false, 00:16:22.762 "data_offset": 0, 00:16:22.762 "data_size": 0 00:16:22.762 }, 00:16:22.762 { 00:16:22.762 "name": "BaseBdev3", 00:16:22.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.762 "is_configured": false, 00:16:22.762 "data_offset": 0, 00:16:22.762 "data_size": 0 00:16:22.762 }, 00:16:22.762 { 00:16:22.762 "name": "BaseBdev4", 00:16:22.762 
"uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.762 "is_configured": false, 00:16:22.762 "data_offset": 0, 00:16:22.762 "data_size": 0 00:16:22.762 } 00:16:22.762 ] 00:16:22.762 }' 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.762 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.327 [2024-12-06 06:42:41.728408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:23.327 [2024-12-06 06:42:41.728458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.327 [2024-12-06 06:42:41.736356] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:23.327 [2024-12-06 06:42:41.736555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:23.327 [2024-12-06 06:42:41.736580] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.327 [2024-12-06 06:42:41.736599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev2 doesn't exist now 00:16:23.327 [2024-12-06 06:42:41.736612] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:23.327 [2024-12-06 06:42:41.736633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:23.327 [2024-12-06 06:42:41.736643] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:23.327 [2024-12-06 06:42:41.736657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.327 [2024-12-06 06:42:41.781976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.327 BaseBdev1 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.327 [ 00:16:23.327 { 00:16:23.327 "name": "BaseBdev1", 00:16:23.327 "aliases": [ 00:16:23.327 "5212d27e-786a-4c20-8d5f-ca74af2a27c9" 00:16:23.327 ], 00:16:23.327 "product_name": "Malloc disk", 00:16:23.327 "block_size": 512, 00:16:23.327 "num_blocks": 65536, 00:16:23.327 "uuid": "5212d27e-786a-4c20-8d5f-ca74af2a27c9", 00:16:23.327 "assigned_rate_limits": { 00:16:23.327 "rw_ios_per_sec": 0, 00:16:23.327 "rw_mbytes_per_sec": 0, 00:16:23.327 "r_mbytes_per_sec": 0, 00:16:23.327 "w_mbytes_per_sec": 0 00:16:23.327 }, 00:16:23.327 "claimed": true, 00:16:23.327 "claim_type": "exclusive_write", 00:16:23.327 "zoned": false, 00:16:23.327 "supported_io_types": { 00:16:23.327 "read": true, 00:16:23.327 "write": true, 00:16:23.327 "unmap": true, 00:16:23.327 "flush": true, 00:16:23.327 "reset": true, 00:16:23.327 "nvme_admin": false, 00:16:23.327 "nvme_io": false, 00:16:23.327 "nvme_io_md": false, 00:16:23.327 "write_zeroes": true, 00:16:23.327 "zcopy": true, 00:16:23.327 "get_zone_info": false, 00:16:23.327 "zone_management": false, 00:16:23.327 "zone_append": false, 00:16:23.327 "compare": false, 00:16:23.327 "compare_and_write": false, 00:16:23.327 "abort": true, 00:16:23.327 "seek_hole": false, 00:16:23.327 "seek_data": false, 00:16:23.327 
"copy": true, 00:16:23.327 "nvme_iov_md": false 00:16:23.327 }, 00:16:23.327 "memory_domains": [ 00:16:23.327 { 00:16:23.327 "dma_device_id": "system", 00:16:23.327 "dma_device_type": 1 00:16:23.327 }, 00:16:23.327 { 00:16:23.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.327 "dma_device_type": 2 00:16:23.327 } 00:16:23.327 ], 00:16:23.327 "driver_specific": {} 00:16:23.327 } 00:16:23.327 ] 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.327 "name": "Existed_Raid", 00:16:23.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.327 "strip_size_kb": 0, 00:16:23.327 "state": "configuring", 00:16:23.327 "raid_level": "raid1", 00:16:23.327 "superblock": false, 00:16:23.327 "num_base_bdevs": 4, 00:16:23.327 "num_base_bdevs_discovered": 1, 00:16:23.327 "num_base_bdevs_operational": 4, 00:16:23.327 "base_bdevs_list": [ 00:16:23.327 { 00:16:23.327 "name": "BaseBdev1", 00:16:23.327 "uuid": "5212d27e-786a-4c20-8d5f-ca74af2a27c9", 00:16:23.327 "is_configured": true, 00:16:23.327 "data_offset": 0, 00:16:23.327 "data_size": 65536 00:16:23.327 }, 00:16:23.327 { 00:16:23.327 "name": "BaseBdev2", 00:16:23.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.327 "is_configured": false, 00:16:23.327 "data_offset": 0, 00:16:23.327 "data_size": 0 00:16:23.327 }, 00:16:23.327 { 00:16:23.327 "name": "BaseBdev3", 00:16:23.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.327 "is_configured": false, 00:16:23.327 "data_offset": 0, 00:16:23.327 "data_size": 0 00:16:23.327 }, 00:16:23.327 { 00:16:23.327 "name": "BaseBdev4", 00:16:23.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.327 "is_configured": false, 00:16:23.327 "data_offset": 0, 00:16:23.327 "data_size": 0 00:16:23.327 } 00:16:23.327 ] 00:16:23.327 }' 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.327 06:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.892 06:42:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.892 [2024-12-06 06:42:42.326222] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:23.892 [2024-12-06 06:42:42.326442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.892 [2024-12-06 06:42:42.338258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.892 [2024-12-06 06:42:42.340739] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.892 [2024-12-06 06:42:42.340908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.892 [2024-12-06 06:42:42.341028] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:23.892 [2024-12-06 06:42:42.341157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:23.892 [2024-12-06 06:42:42.341270] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:23.892 [2024-12-06 06:42:42.341326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't 
exist now 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.892 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.893 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.893 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.893 06:42:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.893 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.893 "name": "Existed_Raid", 00:16:23.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.893 "strip_size_kb": 0, 00:16:23.893 "state": "configuring", 00:16:23.893 "raid_level": "raid1", 00:16:23.893 "superblock": false, 00:16:23.893 "num_base_bdevs": 4, 00:16:23.893 "num_base_bdevs_discovered": 1, 00:16:23.893 "num_base_bdevs_operational": 4, 00:16:23.893 "base_bdevs_list": [ 00:16:23.893 { 00:16:23.893 "name": "BaseBdev1", 00:16:23.893 "uuid": "5212d27e-786a-4c20-8d5f-ca74af2a27c9", 00:16:23.893 "is_configured": true, 00:16:23.893 "data_offset": 0, 00:16:23.893 "data_size": 65536 00:16:23.893 }, 00:16:23.893 { 00:16:23.893 "name": "BaseBdev2", 00:16:23.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.893 "is_configured": false, 00:16:23.893 "data_offset": 0, 00:16:23.893 "data_size": 0 00:16:23.893 }, 00:16:23.893 { 00:16:23.893 "name": "BaseBdev3", 00:16:23.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.893 "is_configured": false, 00:16:23.893 "data_offset": 0, 00:16:23.893 "data_size": 0 00:16:23.893 }, 00:16:23.893 { 00:16:23.893 "name": "BaseBdev4", 00:16:23.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.893 "is_configured": false, 00:16:23.893 "data_offset": 0, 00:16:23.893 "data_size": 0 00:16:23.893 } 00:16:23.893 ] 00:16:23.893 }' 00:16:23.893 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.893 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.458 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.459 06:42:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.459 [2024-12-06 06:42:42.866140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.459 BaseBdev2 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.459 [ 00:16:24.459 { 00:16:24.459 "name": "BaseBdev2", 00:16:24.459 "aliases": [ 00:16:24.459 "3e17ad3e-728b-44cb-b92d-8fbf703bb791" 00:16:24.459 ], 00:16:24.459 "product_name": "Malloc disk", 
00:16:24.459 "block_size": 512, 00:16:24.459 "num_blocks": 65536, 00:16:24.459 "uuid": "3e17ad3e-728b-44cb-b92d-8fbf703bb791", 00:16:24.459 "assigned_rate_limits": { 00:16:24.459 "rw_ios_per_sec": 0, 00:16:24.459 "rw_mbytes_per_sec": 0, 00:16:24.459 "r_mbytes_per_sec": 0, 00:16:24.459 "w_mbytes_per_sec": 0 00:16:24.459 }, 00:16:24.459 "claimed": true, 00:16:24.459 "claim_type": "exclusive_write", 00:16:24.459 "zoned": false, 00:16:24.459 "supported_io_types": { 00:16:24.459 "read": true, 00:16:24.459 "write": true, 00:16:24.459 "unmap": true, 00:16:24.459 "flush": true, 00:16:24.459 "reset": true, 00:16:24.459 "nvme_admin": false, 00:16:24.459 "nvme_io": false, 00:16:24.459 "nvme_io_md": false, 00:16:24.459 "write_zeroes": true, 00:16:24.459 "zcopy": true, 00:16:24.459 "get_zone_info": false, 00:16:24.459 "zone_management": false, 00:16:24.459 "zone_append": false, 00:16:24.459 "compare": false, 00:16:24.459 "compare_and_write": false, 00:16:24.459 "abort": true, 00:16:24.459 "seek_hole": false, 00:16:24.459 "seek_data": false, 00:16:24.459 "copy": true, 00:16:24.459 "nvme_iov_md": false 00:16:24.459 }, 00:16:24.459 "memory_domains": [ 00:16:24.459 { 00:16:24.459 "dma_device_id": "system", 00:16:24.459 "dma_device_type": 1 00:16:24.459 }, 00:16:24.459 { 00:16:24.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.459 "dma_device_type": 2 00:16:24.459 } 00:16:24.459 ], 00:16:24.459 "driver_specific": {} 00:16:24.459 } 00:16:24.459 ] 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 4 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.459 "name": "Existed_Raid", 00:16:24.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.459 "strip_size_kb": 0, 00:16:24.459 "state": "configuring", 00:16:24.459 "raid_level": "raid1", 00:16:24.459 "superblock": false, 00:16:24.459 "num_base_bdevs": 4, 
00:16:24.459 "num_base_bdevs_discovered": 2, 00:16:24.459 "num_base_bdevs_operational": 4, 00:16:24.459 "base_bdevs_list": [ 00:16:24.459 { 00:16:24.459 "name": "BaseBdev1", 00:16:24.459 "uuid": "5212d27e-786a-4c20-8d5f-ca74af2a27c9", 00:16:24.459 "is_configured": true, 00:16:24.459 "data_offset": 0, 00:16:24.459 "data_size": 65536 00:16:24.459 }, 00:16:24.459 { 00:16:24.459 "name": "BaseBdev2", 00:16:24.459 "uuid": "3e17ad3e-728b-44cb-b92d-8fbf703bb791", 00:16:24.459 "is_configured": true, 00:16:24.459 "data_offset": 0, 00:16:24.459 "data_size": 65536 00:16:24.459 }, 00:16:24.459 { 00:16:24.459 "name": "BaseBdev3", 00:16:24.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.459 "is_configured": false, 00:16:24.459 "data_offset": 0, 00:16:24.459 "data_size": 0 00:16:24.459 }, 00:16:24.459 { 00:16:24.459 "name": "BaseBdev4", 00:16:24.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.459 "is_configured": false, 00:16:24.459 "data_offset": 0, 00:16:24.459 "data_size": 0 00:16:24.459 } 00:16:24.459 ] 00:16:24.459 }' 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.459 06:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.027 [2024-12-06 06:42:43.453411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.027 BaseBdev3 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:25.027 06:42:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.027 [ 00:16:25.027 { 00:16:25.027 "name": "BaseBdev3", 00:16:25.027 "aliases": [ 00:16:25.027 "2385dc69-2534-4822-a2c1-15f95407140f" 00:16:25.027 ], 00:16:25.027 "product_name": "Malloc disk", 00:16:25.027 "block_size": 512, 00:16:25.027 "num_blocks": 65536, 00:16:25.027 "uuid": "2385dc69-2534-4822-a2c1-15f95407140f", 00:16:25.027 "assigned_rate_limits": { 00:16:25.027 "rw_ios_per_sec": 0, 00:16:25.027 "rw_mbytes_per_sec": 0, 00:16:25.027 "r_mbytes_per_sec": 0, 00:16:25.027 "w_mbytes_per_sec": 0 00:16:25.027 }, 00:16:25.027 "claimed": true, 00:16:25.027 "claim_type": "exclusive_write", 00:16:25.027 "zoned": false, 00:16:25.027 "supported_io_types": { 
00:16:25.027 "read": true, 00:16:25.027 "write": true, 00:16:25.027 "unmap": true, 00:16:25.027 "flush": true, 00:16:25.027 "reset": true, 00:16:25.027 "nvme_admin": false, 00:16:25.027 "nvme_io": false, 00:16:25.027 "nvme_io_md": false, 00:16:25.027 "write_zeroes": true, 00:16:25.027 "zcopy": true, 00:16:25.027 "get_zone_info": false, 00:16:25.027 "zone_management": false, 00:16:25.027 "zone_append": false, 00:16:25.027 "compare": false, 00:16:25.027 "compare_and_write": false, 00:16:25.027 "abort": true, 00:16:25.027 "seek_hole": false, 00:16:25.027 "seek_data": false, 00:16:25.027 "copy": true, 00:16:25.027 "nvme_iov_md": false 00:16:25.027 }, 00:16:25.027 "memory_domains": [ 00:16:25.027 { 00:16:25.027 "dma_device_id": "system", 00:16:25.027 "dma_device_type": 1 00:16:25.027 }, 00:16:25.027 { 00:16:25.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.027 "dma_device_type": 2 00:16:25.027 } 00:16:25.027 ], 00:16:25.027 "driver_specific": {} 00:16:25.027 } 00:16:25.027 ] 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.027 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.028 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=0 00:16:25.028 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.028 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.028 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.028 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.028 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.028 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.028 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.028 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.028 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.028 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.028 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.028 "name": "Existed_Raid", 00:16:25.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.028 "strip_size_kb": 0, 00:16:25.028 "state": "configuring", 00:16:25.028 "raid_level": "raid1", 00:16:25.028 "superblock": false, 00:16:25.028 "num_base_bdevs": 4, 00:16:25.028 "num_base_bdevs_discovered": 3, 00:16:25.028 "num_base_bdevs_operational": 4, 00:16:25.028 "base_bdevs_list": [ 00:16:25.028 { 00:16:25.028 "name": "BaseBdev1", 00:16:25.028 "uuid": "5212d27e-786a-4c20-8d5f-ca74af2a27c9", 00:16:25.028 "is_configured": true, 00:16:25.028 "data_offset": 0, 00:16:25.028 "data_size": 65536 00:16:25.028 }, 00:16:25.028 { 00:16:25.028 "name": "BaseBdev2", 00:16:25.028 "uuid": "3e17ad3e-728b-44cb-b92d-8fbf703bb791", 00:16:25.028 
"is_configured": true, 00:16:25.028 "data_offset": 0, 00:16:25.028 "data_size": 65536 00:16:25.028 }, 00:16:25.028 { 00:16:25.028 "name": "BaseBdev3", 00:16:25.028 "uuid": "2385dc69-2534-4822-a2c1-15f95407140f", 00:16:25.028 "is_configured": true, 00:16:25.028 "data_offset": 0, 00:16:25.028 "data_size": 65536 00:16:25.028 }, 00:16:25.028 { 00:16:25.028 "name": "BaseBdev4", 00:16:25.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.028 "is_configured": false, 00:16:25.028 "data_offset": 0, 00:16:25.028 "data_size": 0 00:16:25.028 } 00:16:25.028 ] 00:16:25.028 }' 00:16:25.028 06:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.028 06:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.595 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.596 [2024-12-06 06:42:44.055167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:25.596 [2024-12-06 06:42:44.055235] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:25.596 [2024-12-06 06:42:44.055249] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:25.596 [2024-12-06 06:42:44.055669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:25.596 [2024-12-06 06:42:44.055885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:25.596 [2024-12-06 06:42:44.055907] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:25.596 [2024-12-06 06:42:44.056234] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:16:25.596 BaseBdev4 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.596 [ 00:16:25.596 { 00:16:25.596 "name": "BaseBdev4", 00:16:25.596 "aliases": [ 00:16:25.596 "ea1e69c4-5e7e-46e0-b4ea-1a9223a7988d" 00:16:25.596 ], 00:16:25.596 "product_name": "Malloc disk", 00:16:25.596 "block_size": 512, 00:16:25.596 "num_blocks": 65536, 00:16:25.596 "uuid": "ea1e69c4-5e7e-46e0-b4ea-1a9223a7988d", 00:16:25.596 "assigned_rate_limits": { 
00:16:25.596 "rw_ios_per_sec": 0, 00:16:25.596 "rw_mbytes_per_sec": 0, 00:16:25.596 "r_mbytes_per_sec": 0, 00:16:25.596 "w_mbytes_per_sec": 0 00:16:25.596 }, 00:16:25.596 "claimed": true, 00:16:25.596 "claim_type": "exclusive_write", 00:16:25.596 "zoned": false, 00:16:25.596 "supported_io_types": { 00:16:25.596 "read": true, 00:16:25.596 "write": true, 00:16:25.596 "unmap": true, 00:16:25.596 "flush": true, 00:16:25.596 "reset": true, 00:16:25.596 "nvme_admin": false, 00:16:25.596 "nvme_io": false, 00:16:25.596 "nvme_io_md": false, 00:16:25.596 "write_zeroes": true, 00:16:25.596 "zcopy": true, 00:16:25.596 "get_zone_info": false, 00:16:25.596 "zone_management": false, 00:16:25.596 "zone_append": false, 00:16:25.596 "compare": false, 00:16:25.596 "compare_and_write": false, 00:16:25.596 "abort": true, 00:16:25.596 "seek_hole": false, 00:16:25.596 "seek_data": false, 00:16:25.596 "copy": true, 00:16:25.596 "nvme_iov_md": false 00:16:25.596 }, 00:16:25.596 "memory_domains": [ 00:16:25.596 { 00:16:25.596 "dma_device_id": "system", 00:16:25.596 "dma_device_type": 1 00:16:25.596 }, 00:16:25.596 { 00:16:25.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.596 "dma_device_type": 2 00:16:25.596 } 00:16:25.596 ], 00:16:25.596 "driver_specific": {} 00:16:25.596 } 00:16:25.596 ] 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.596 06:42:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.596 "name": "Existed_Raid", 00:16:25.596 "uuid": "e56469d6-73af-4a52-873c-54bfd9f24bc9", 00:16:25.596 "strip_size_kb": 0, 00:16:25.596 "state": "online", 00:16:25.596 "raid_level": "raid1", 00:16:25.596 "superblock": false, 00:16:25.596 "num_base_bdevs": 4, 00:16:25.596 "num_base_bdevs_discovered": 4, 00:16:25.596 "num_base_bdevs_operational": 4, 00:16:25.596 "base_bdevs_list": [ 00:16:25.596 { 00:16:25.596 "name": "BaseBdev1", 00:16:25.596 
"uuid": "5212d27e-786a-4c20-8d5f-ca74af2a27c9", 00:16:25.596 "is_configured": true, 00:16:25.596 "data_offset": 0, 00:16:25.596 "data_size": 65536 00:16:25.596 }, 00:16:25.596 { 00:16:25.596 "name": "BaseBdev2", 00:16:25.596 "uuid": "3e17ad3e-728b-44cb-b92d-8fbf703bb791", 00:16:25.596 "is_configured": true, 00:16:25.596 "data_offset": 0, 00:16:25.596 "data_size": 65536 00:16:25.596 }, 00:16:25.596 { 00:16:25.596 "name": "BaseBdev3", 00:16:25.596 "uuid": "2385dc69-2534-4822-a2c1-15f95407140f", 00:16:25.596 "is_configured": true, 00:16:25.596 "data_offset": 0, 00:16:25.596 "data_size": 65536 00:16:25.596 }, 00:16:25.596 { 00:16:25.596 "name": "BaseBdev4", 00:16:25.596 "uuid": "ea1e69c4-5e7e-46e0-b4ea-1a9223a7988d", 00:16:25.596 "is_configured": true, 00:16:25.596 "data_offset": 0, 00:16:25.596 "data_size": 65536 00:16:25.596 } 00:16:25.596 ] 00:16:25.596 }' 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.596 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.164 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:26.164 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:26.164 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:26.164 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:26.164 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:26.164 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:26.164 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:26.164 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:26.164 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.164 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:26.164 [2024-12-06 06:42:44.631832] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.164 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.164 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:26.164 "name": "Existed_Raid", 00:16:26.164 "aliases": [ 00:16:26.164 "e56469d6-73af-4a52-873c-54bfd9f24bc9" 00:16:26.164 ], 00:16:26.164 "product_name": "Raid Volume", 00:16:26.164 "block_size": 512, 00:16:26.164 "num_blocks": 65536, 00:16:26.164 "uuid": "e56469d6-73af-4a52-873c-54bfd9f24bc9", 00:16:26.164 "assigned_rate_limits": { 00:16:26.164 "rw_ios_per_sec": 0, 00:16:26.165 "rw_mbytes_per_sec": 0, 00:16:26.165 "r_mbytes_per_sec": 0, 00:16:26.165 "w_mbytes_per_sec": 0 00:16:26.165 }, 00:16:26.165 "claimed": false, 00:16:26.165 "zoned": false, 00:16:26.165 "supported_io_types": { 00:16:26.165 "read": true, 00:16:26.165 "write": true, 00:16:26.165 "unmap": false, 00:16:26.165 "flush": false, 00:16:26.165 "reset": true, 00:16:26.165 "nvme_admin": false, 00:16:26.165 "nvme_io": false, 00:16:26.165 "nvme_io_md": false, 00:16:26.165 "write_zeroes": true, 00:16:26.165 "zcopy": false, 00:16:26.165 "get_zone_info": false, 00:16:26.165 "zone_management": false, 00:16:26.165 "zone_append": false, 00:16:26.165 "compare": false, 00:16:26.165 "compare_and_write": false, 00:16:26.165 "abort": false, 00:16:26.165 "seek_hole": false, 00:16:26.165 "seek_data": false, 00:16:26.165 "copy": false, 00:16:26.165 "nvme_iov_md": false 00:16:26.165 }, 00:16:26.165 "memory_domains": [ 00:16:26.165 { 00:16:26.165 "dma_device_id": "system", 00:16:26.165 "dma_device_type": 1 00:16:26.165 }, 00:16:26.165 { 00:16:26.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:16:26.165 "dma_device_type": 2 00:16:26.165 }, 00:16:26.165 { 00:16:26.165 "dma_device_id": "system", 00:16:26.165 "dma_device_type": 1 00:16:26.165 }, 00:16:26.165 { 00:16:26.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.165 "dma_device_type": 2 00:16:26.165 }, 00:16:26.165 { 00:16:26.165 "dma_device_id": "system", 00:16:26.165 "dma_device_type": 1 00:16:26.165 }, 00:16:26.165 { 00:16:26.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.165 "dma_device_type": 2 00:16:26.165 }, 00:16:26.165 { 00:16:26.165 "dma_device_id": "system", 00:16:26.165 "dma_device_type": 1 00:16:26.165 }, 00:16:26.165 { 00:16:26.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.165 "dma_device_type": 2 00:16:26.165 } 00:16:26.165 ], 00:16:26.165 "driver_specific": { 00:16:26.165 "raid": { 00:16:26.165 "uuid": "e56469d6-73af-4a52-873c-54bfd9f24bc9", 00:16:26.165 "strip_size_kb": 0, 00:16:26.165 "state": "online", 00:16:26.165 "raid_level": "raid1", 00:16:26.165 "superblock": false, 00:16:26.165 "num_base_bdevs": 4, 00:16:26.165 "num_base_bdevs_discovered": 4, 00:16:26.165 "num_base_bdevs_operational": 4, 00:16:26.165 "base_bdevs_list": [ 00:16:26.165 { 00:16:26.165 "name": "BaseBdev1", 00:16:26.165 "uuid": "5212d27e-786a-4c20-8d5f-ca74af2a27c9", 00:16:26.165 "is_configured": true, 00:16:26.165 "data_offset": 0, 00:16:26.165 "data_size": 65536 00:16:26.165 }, 00:16:26.165 { 00:16:26.165 "name": "BaseBdev2", 00:16:26.165 "uuid": "3e17ad3e-728b-44cb-b92d-8fbf703bb791", 00:16:26.165 "is_configured": true, 00:16:26.165 "data_offset": 0, 00:16:26.165 "data_size": 65536 00:16:26.165 }, 00:16:26.165 { 00:16:26.165 "name": "BaseBdev3", 00:16:26.165 "uuid": "2385dc69-2534-4822-a2c1-15f95407140f", 00:16:26.165 "is_configured": true, 00:16:26.165 "data_offset": 0, 00:16:26.165 "data_size": 65536 00:16:26.165 }, 00:16:26.165 { 00:16:26.165 "name": "BaseBdev4", 00:16:26.165 "uuid": "ea1e69c4-5e7e-46e0-b4ea-1a9223a7988d", 00:16:26.165 "is_configured": true, 00:16:26.165 
"data_offset": 0, 00:16:26.165 "data_size": 65536 00:16:26.165 } 00:16:26.165 ] 00:16:26.165 } 00:16:26.165 } 00:16:26.165 }' 00:16:26.165 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:26.165 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:26.165 BaseBdev2 00:16:26.165 BaseBdev3 00:16:26.165 BaseBdev4' 00:16:26.165 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.165 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:26.165 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.165 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:26.165 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.165 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.165 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.165 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.425 06:42:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.425 
06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.425 06:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.425 [2024-12-06 06:42:44.995608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.683 06:42:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.683 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.683 "name": "Existed_Raid", 00:16:26.683 "uuid": "e56469d6-73af-4a52-873c-54bfd9f24bc9", 00:16:26.683 "strip_size_kb": 0, 00:16:26.683 "state": "online", 00:16:26.683 "raid_level": "raid1", 00:16:26.683 "superblock": false, 00:16:26.683 "num_base_bdevs": 4, 00:16:26.683 "num_base_bdevs_discovered": 3, 00:16:26.683 "num_base_bdevs_operational": 3, 00:16:26.683 "base_bdevs_list": [ 00:16:26.683 { 00:16:26.683 "name": null, 00:16:26.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.683 "is_configured": false, 00:16:26.683 "data_offset": 0, 00:16:26.683 
"data_size": 65536 00:16:26.683 }, 00:16:26.683 { 00:16:26.683 "name": "BaseBdev2", 00:16:26.683 "uuid": "3e17ad3e-728b-44cb-b92d-8fbf703bb791", 00:16:26.684 "is_configured": true, 00:16:26.684 "data_offset": 0, 00:16:26.684 "data_size": 65536 00:16:26.684 }, 00:16:26.684 { 00:16:26.684 "name": "BaseBdev3", 00:16:26.684 "uuid": "2385dc69-2534-4822-a2c1-15f95407140f", 00:16:26.684 "is_configured": true, 00:16:26.684 "data_offset": 0, 00:16:26.684 "data_size": 65536 00:16:26.684 }, 00:16:26.684 { 00:16:26.684 "name": "BaseBdev4", 00:16:26.684 "uuid": "ea1e69c4-5e7e-46e0-b4ea-1a9223a7988d", 00:16:26.684 "is_configured": true, 00:16:26.684 "data_offset": 0, 00:16:26.684 "data_size": 65536 00:16:26.684 } 00:16:26.684 ] 00:16:26.684 }' 00:16:26.684 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.684 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.250 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:27.250 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:27.250 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.250 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.250 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.250 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:27.250 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.250 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:27.250 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:27.250 06:42:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:27.251 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.251 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.251 [2024-12-06 06:42:45.703352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:27.251 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.251 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:27.251 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:27.251 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:27.251 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.251 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.251 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.251 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.251 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:27.251 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:27.251 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:27.251 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.251 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.251 [2024-12-06 06:42:45.859300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:27.509 06:42:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.509 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:27.509 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:27.509 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:27.509 06:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.509 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.509 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.509 06:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.509 [2024-12-06 06:42:46.008000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:27.509 [2024-12-06 06:42:46.008134] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.509 [2024-12-06 06:42:46.097246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.509 [2024-12-06 06:42:46.097320] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.509 [2024-12-06 06:42:46.097341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007e80 name Existed_Raid, state offline 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.509 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.767 BaseBdev2 00:16:27.767 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.767 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # 
waitforbdev BaseBdev2 00:16:27.767 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:27.767 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:27.767 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:27.767 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:27.767 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:27.767 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:27.767 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.767 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.768 [ 00:16:27.768 { 00:16:27.768 "name": "BaseBdev2", 00:16:27.768 "aliases": [ 00:16:27.768 "5e13ffb0-e25d-485f-adf0-ef5fb8338216" 00:16:27.768 ], 00:16:27.768 "product_name": "Malloc disk", 00:16:27.768 "block_size": 512, 00:16:27.768 "num_blocks": 65536, 00:16:27.768 "uuid": "5e13ffb0-e25d-485f-adf0-ef5fb8338216", 00:16:27.768 "assigned_rate_limits": { 00:16:27.768 "rw_ios_per_sec": 0, 00:16:27.768 "rw_mbytes_per_sec": 0, 00:16:27.768 "r_mbytes_per_sec": 0, 00:16:27.768 "w_mbytes_per_sec": 0 00:16:27.768 }, 00:16:27.768 "claimed": false, 00:16:27.768 "zoned": false, 00:16:27.768 "supported_io_types": { 
00:16:27.768 "read": true, 00:16:27.768 "write": true, 00:16:27.768 "unmap": true, 00:16:27.768 "flush": true, 00:16:27.768 "reset": true, 00:16:27.768 "nvme_admin": false, 00:16:27.768 "nvme_io": false, 00:16:27.768 "nvme_io_md": false, 00:16:27.768 "write_zeroes": true, 00:16:27.768 "zcopy": true, 00:16:27.768 "get_zone_info": false, 00:16:27.768 "zone_management": false, 00:16:27.768 "zone_append": false, 00:16:27.768 "compare": false, 00:16:27.768 "compare_and_write": false, 00:16:27.768 "abort": true, 00:16:27.768 "seek_hole": false, 00:16:27.768 "seek_data": false, 00:16:27.768 "copy": true, 00:16:27.768 "nvme_iov_md": false 00:16:27.768 }, 00:16:27.768 "memory_domains": [ 00:16:27.768 { 00:16:27.768 "dma_device_id": "system", 00:16:27.768 "dma_device_type": 1 00:16:27.768 }, 00:16:27.768 { 00:16:27.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.768 "dma_device_type": 2 00:16:27.768 } 00:16:27.768 ], 00:16:27.768 "driver_specific": {} 00:16:27.768 } 00:16:27.768 ] 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.768 BaseBdev3 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 
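The trace above repeatedly captures `rpc_cmd bdev_raid_get_bdevs all` and pipes it through a jq filter (`.[] | select(.name == "Existed_Raid")`) to isolate one raid bdev's state before asserting on its fields. As a minimal, self-contained sketch of that selection pattern — using sample JSON in place of a live SPDK target, with hypothetical field values — the same extraction looks like:

```shell
#!/usr/bin/env bash
# Sketch of the jq selection used by verify_raid_bdev_state (bdev_raid.sh@113).
# The sample JSON below stands in for `rpc_cmd bdev_raid_get_bdevs all`;
# no running SPDK target is assumed, and the values are illustrative.
set -euo pipefail

bdevs_json='[
  {"name": "Existed_Raid", "state": "online", "raid_level": "raid1",
   "num_base_bdevs": 4, "num_base_bdevs_discovered": 3},
  {"name": "Other_Raid", "state": "configuring", "raid_level": "raid1"}
]'

# Same filter as the trace: keep only the raid bdev under test.
raid_bdev_info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<<"$bdevs_json")

# Pull individual fields out of the captured object, as the test's
# later state assertions do.
state=$(jq -r '.state' <<<"$raid_bdev_info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$raid_bdev_info")

echo "state=$state discovered=$discovered"
```

The test script keeps the whole selected object in a local variable (`raid_bdev_info`) and re-queries individual fields from it, which avoids racing a second RPC call against state transitions such as the online-to-offline change visible later in this trace.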
00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.768 [ 00:16:27.768 { 00:16:27.768 "name": "BaseBdev3", 00:16:27.768 "aliases": [ 00:16:27.768 "4cbe8b17-fc22-4496-a185-cd360eb9a905" 00:16:27.768 ], 00:16:27.768 "product_name": "Malloc disk", 00:16:27.768 "block_size": 512, 00:16:27.768 "num_blocks": 65536, 00:16:27.768 "uuid": "4cbe8b17-fc22-4496-a185-cd360eb9a905", 00:16:27.768 "assigned_rate_limits": { 00:16:27.768 "rw_ios_per_sec": 0, 00:16:27.768 "rw_mbytes_per_sec": 0, 00:16:27.768 "r_mbytes_per_sec": 0, 00:16:27.768 "w_mbytes_per_sec": 0 00:16:27.768 }, 00:16:27.768 "claimed": false, 00:16:27.768 "zoned": false, 00:16:27.768 "supported_io_types": { 00:16:27.768 "read": 
true, 00:16:27.768 "write": true, 00:16:27.768 "unmap": true, 00:16:27.768 "flush": true, 00:16:27.768 "reset": true, 00:16:27.768 "nvme_admin": false, 00:16:27.768 "nvme_io": false, 00:16:27.768 "nvme_io_md": false, 00:16:27.768 "write_zeroes": true, 00:16:27.768 "zcopy": true, 00:16:27.768 "get_zone_info": false, 00:16:27.768 "zone_management": false, 00:16:27.768 "zone_append": false, 00:16:27.768 "compare": false, 00:16:27.768 "compare_and_write": false, 00:16:27.768 "abort": true, 00:16:27.768 "seek_hole": false, 00:16:27.768 "seek_data": false, 00:16:27.768 "copy": true, 00:16:27.768 "nvme_iov_md": false 00:16:27.768 }, 00:16:27.768 "memory_domains": [ 00:16:27.768 { 00:16:27.768 "dma_device_id": "system", 00:16:27.768 "dma_device_type": 1 00:16:27.768 }, 00:16:27.768 { 00:16:27.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.768 "dma_device_type": 2 00:16:27.768 } 00:16:27.768 ], 00:16:27.768 "driver_specific": {} 00:16:27.768 } 00:16:27.768 ] 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.768 BaseBdev4 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:27.768 06:42:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.768 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.768 [ 00:16:27.768 { 00:16:27.768 "name": "BaseBdev4", 00:16:27.768 "aliases": [ 00:16:27.768 "fa14cbdc-bb92-4f50-8215-3adf6b81891f" 00:16:27.768 ], 00:16:27.768 "product_name": "Malloc disk", 00:16:27.768 "block_size": 512, 00:16:27.768 "num_blocks": 65536, 00:16:27.768 "uuid": "fa14cbdc-bb92-4f50-8215-3adf6b81891f", 00:16:27.768 "assigned_rate_limits": { 00:16:27.768 "rw_ios_per_sec": 0, 00:16:27.768 "rw_mbytes_per_sec": 0, 00:16:27.768 "r_mbytes_per_sec": 0, 00:16:27.768 "w_mbytes_per_sec": 0 00:16:27.768 }, 00:16:27.768 "claimed": false, 00:16:27.768 "zoned": false, 00:16:27.768 "supported_io_types": { 00:16:27.768 "read": true, 00:16:27.768 
"write": true, 00:16:27.768 "unmap": true, 00:16:27.768 "flush": true, 00:16:27.768 "reset": true, 00:16:27.768 "nvme_admin": false, 00:16:27.768 "nvme_io": false, 00:16:27.768 "nvme_io_md": false, 00:16:27.768 "write_zeroes": true, 00:16:27.768 "zcopy": true, 00:16:27.768 "get_zone_info": false, 00:16:27.768 "zone_management": false, 00:16:27.768 "zone_append": false, 00:16:27.768 "compare": false, 00:16:27.768 "compare_and_write": false, 00:16:27.768 "abort": true, 00:16:27.768 "seek_hole": false, 00:16:27.768 "seek_data": false, 00:16:27.768 "copy": true, 00:16:27.768 "nvme_iov_md": false 00:16:27.768 }, 00:16:27.768 "memory_domains": [ 00:16:27.768 { 00:16:27.768 "dma_device_id": "system", 00:16:27.768 "dma_device_type": 1 00:16:27.768 }, 00:16:27.768 { 00:16:27.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.768 "dma_device_type": 2 00:16:27.769 } 00:16:27.769 ], 00:16:27.769 "driver_specific": {} 00:16:27.769 } 00:16:27.769 ] 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.769 [2024-12-06 06:42:46.401315] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:27.769 [2024-12-06 06:42:46.401377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev1 doesn't exist now 00:16:27.769 [2024-12-06 06:42:46.401407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:27.769 [2024-12-06 06:42:46.403866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:27.769 [2024-12-06 06:42:46.403932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:27.769 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.028 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.028 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.028 "name": "Existed_Raid", 00:16:28.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.028 "strip_size_kb": 0, 00:16:28.028 "state": "configuring", 00:16:28.028 "raid_level": "raid1", 00:16:28.028 "superblock": false, 00:16:28.028 "num_base_bdevs": 4, 00:16:28.028 "num_base_bdevs_discovered": 3, 00:16:28.028 "num_base_bdevs_operational": 4, 00:16:28.028 "base_bdevs_list": [ 00:16:28.028 { 00:16:28.028 "name": "BaseBdev1", 00:16:28.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.028 "is_configured": false, 00:16:28.028 "data_offset": 0, 00:16:28.028 "data_size": 0 00:16:28.028 }, 00:16:28.028 { 00:16:28.028 "name": "BaseBdev2", 00:16:28.028 "uuid": "5e13ffb0-e25d-485f-adf0-ef5fb8338216", 00:16:28.028 "is_configured": true, 00:16:28.028 "data_offset": 0, 00:16:28.028 "data_size": 65536 00:16:28.028 }, 00:16:28.028 { 00:16:28.028 "name": "BaseBdev3", 00:16:28.028 "uuid": "4cbe8b17-fc22-4496-a185-cd360eb9a905", 00:16:28.028 "is_configured": true, 00:16:28.028 "data_offset": 0, 00:16:28.028 "data_size": 65536 00:16:28.028 }, 00:16:28.028 { 00:16:28.028 "name": "BaseBdev4", 00:16:28.028 "uuid": "fa14cbdc-bb92-4f50-8215-3adf6b81891f", 00:16:28.028 "is_configured": true, 00:16:28.028 "data_offset": 0, 00:16:28.028 "data_size": 65536 00:16:28.028 } 00:16:28.028 ] 00:16:28.028 }' 00:16:28.028 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.028 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.287 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev2 00:16:28.287 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.287 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.287 [2024-12-06 06:42:46.909490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:28.287 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.287 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:28.287 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.287 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.287 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.287 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.287 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.287 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.287 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.287 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.287 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.287 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.287 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.287 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.287 
06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.546 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.546 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.546 "name": "Existed_Raid", 00:16:28.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.546 "strip_size_kb": 0, 00:16:28.546 "state": "configuring", 00:16:28.546 "raid_level": "raid1", 00:16:28.546 "superblock": false, 00:16:28.546 "num_base_bdevs": 4, 00:16:28.546 "num_base_bdevs_discovered": 2, 00:16:28.546 "num_base_bdevs_operational": 4, 00:16:28.546 "base_bdevs_list": [ 00:16:28.546 { 00:16:28.546 "name": "BaseBdev1", 00:16:28.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.546 "is_configured": false, 00:16:28.546 "data_offset": 0, 00:16:28.546 "data_size": 0 00:16:28.546 }, 00:16:28.546 { 00:16:28.546 "name": null, 00:16:28.546 "uuid": "5e13ffb0-e25d-485f-adf0-ef5fb8338216", 00:16:28.546 "is_configured": false, 00:16:28.546 "data_offset": 0, 00:16:28.546 "data_size": 65536 00:16:28.546 }, 00:16:28.546 { 00:16:28.546 "name": "BaseBdev3", 00:16:28.546 "uuid": "4cbe8b17-fc22-4496-a185-cd360eb9a905", 00:16:28.546 "is_configured": true, 00:16:28.546 "data_offset": 0, 00:16:28.546 "data_size": 65536 00:16:28.546 }, 00:16:28.546 { 00:16:28.546 "name": "BaseBdev4", 00:16:28.546 "uuid": "fa14cbdc-bb92-4f50-8215-3adf6b81891f", 00:16:28.546 "is_configured": true, 00:16:28.546 "data_offset": 0, 00:16:28.546 "data_size": 65536 00:16:28.546 } 00:16:28.546 ] 00:16:28.546 }' 00:16:28.546 06:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.546 06:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.805 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.805 06:42:47 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:28.805 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.805 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.064 [2024-12-06 06:42:47.535232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.064 BaseBdev1 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.064 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.064 [ 00:16:29.064 { 00:16:29.064 "name": "BaseBdev1", 00:16:29.064 "aliases": [ 00:16:29.064 "ea51b56e-bed2-40e0-a4e8-e0507f5272f6" 00:16:29.064 ], 00:16:29.064 "product_name": "Malloc disk", 00:16:29.064 "block_size": 512, 00:16:29.064 "num_blocks": 65536, 00:16:29.064 "uuid": "ea51b56e-bed2-40e0-a4e8-e0507f5272f6", 00:16:29.064 "assigned_rate_limits": { 00:16:29.064 "rw_ios_per_sec": 0, 00:16:29.064 "rw_mbytes_per_sec": 0, 00:16:29.064 "r_mbytes_per_sec": 0, 00:16:29.064 "w_mbytes_per_sec": 0 00:16:29.064 }, 00:16:29.064 "claimed": true, 00:16:29.064 "claim_type": "exclusive_write", 00:16:29.064 "zoned": false, 00:16:29.064 "supported_io_types": { 00:16:29.064 "read": true, 00:16:29.064 "write": true, 00:16:29.064 "unmap": true, 00:16:29.064 "flush": true, 00:16:29.064 "reset": true, 00:16:29.064 "nvme_admin": false, 00:16:29.064 "nvme_io": false, 00:16:29.064 "nvme_io_md": false, 00:16:29.064 "write_zeroes": true, 00:16:29.064 "zcopy": true, 00:16:29.064 "get_zone_info": false, 00:16:29.064 "zone_management": false, 00:16:29.064 "zone_append": false, 00:16:29.064 "compare": false, 00:16:29.064 "compare_and_write": false, 00:16:29.064 "abort": true, 00:16:29.064 "seek_hole": false, 00:16:29.064 "seek_data": false, 00:16:29.064 "copy": true, 00:16:29.064 "nvme_iov_md": false 00:16:29.064 }, 00:16:29.065 "memory_domains": [ 00:16:29.065 { 00:16:29.065 "dma_device_id": "system", 00:16:29.065 
"dma_device_type": 1 00:16:29.065 }, 00:16:29.065 { 00:16:29.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.065 "dma_device_type": 2 00:16:29.065 } 00:16:29.065 ], 00:16:29.065 "driver_specific": {} 00:16:29.065 } 00:16:29.065 ] 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.065 06:42:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.065 "name": "Existed_Raid", 00:16:29.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.065 "strip_size_kb": 0, 00:16:29.065 "state": "configuring", 00:16:29.065 "raid_level": "raid1", 00:16:29.065 "superblock": false, 00:16:29.065 "num_base_bdevs": 4, 00:16:29.065 "num_base_bdevs_discovered": 3, 00:16:29.065 "num_base_bdevs_operational": 4, 00:16:29.065 "base_bdevs_list": [ 00:16:29.065 { 00:16:29.065 "name": "BaseBdev1", 00:16:29.065 "uuid": "ea51b56e-bed2-40e0-a4e8-e0507f5272f6", 00:16:29.065 "is_configured": true, 00:16:29.065 "data_offset": 0, 00:16:29.065 "data_size": 65536 00:16:29.065 }, 00:16:29.065 { 00:16:29.065 "name": null, 00:16:29.065 "uuid": "5e13ffb0-e25d-485f-adf0-ef5fb8338216", 00:16:29.065 "is_configured": false, 00:16:29.065 "data_offset": 0, 00:16:29.065 "data_size": 65536 00:16:29.065 }, 00:16:29.065 { 00:16:29.065 "name": "BaseBdev3", 00:16:29.065 "uuid": "4cbe8b17-fc22-4496-a185-cd360eb9a905", 00:16:29.065 "is_configured": true, 00:16:29.065 "data_offset": 0, 00:16:29.065 "data_size": 65536 00:16:29.065 }, 00:16:29.065 { 00:16:29.065 "name": "BaseBdev4", 00:16:29.065 "uuid": "fa14cbdc-bb92-4f50-8215-3adf6b81891f", 00:16:29.065 "is_configured": true, 00:16:29.065 "data_offset": 0, 00:16:29.065 "data_size": 65536 00:16:29.065 } 00:16:29.065 ] 00:16:29.065 }' 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.065 06:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:29.633 06:42:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.633 [2024-12-06 06:42:48.099494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.633 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.633 "name": "Existed_Raid", 00:16:29.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.633 "strip_size_kb": 0, 00:16:29.633 "state": "configuring", 00:16:29.633 "raid_level": "raid1", 00:16:29.633 "superblock": false, 00:16:29.633 "num_base_bdevs": 4, 00:16:29.633 "num_base_bdevs_discovered": 2, 00:16:29.633 "num_base_bdevs_operational": 4, 00:16:29.633 "base_bdevs_list": [ 00:16:29.633 { 00:16:29.633 "name": "BaseBdev1", 00:16:29.633 "uuid": "ea51b56e-bed2-40e0-a4e8-e0507f5272f6", 00:16:29.633 "is_configured": true, 00:16:29.633 "data_offset": 0, 00:16:29.633 "data_size": 65536 00:16:29.633 }, 00:16:29.633 { 00:16:29.633 "name": null, 00:16:29.633 "uuid": "5e13ffb0-e25d-485f-adf0-ef5fb8338216", 00:16:29.633 "is_configured": false, 00:16:29.633 "data_offset": 0, 00:16:29.633 "data_size": 65536 00:16:29.633 }, 00:16:29.633 { 00:16:29.633 "name": null, 00:16:29.633 "uuid": "4cbe8b17-fc22-4496-a185-cd360eb9a905", 00:16:29.633 "is_configured": false, 00:16:29.633 "data_offset": 0, 00:16:29.633 "data_size": 65536 00:16:29.633 }, 00:16:29.633 { 
00:16:29.633 "name": "BaseBdev4", 00:16:29.633 "uuid": "fa14cbdc-bb92-4f50-8215-3adf6b81891f", 00:16:29.633 "is_configured": true, 00:16:29.633 "data_offset": 0, 00:16:29.633 "data_size": 65536 00:16:29.634 } 00:16:29.634 ] 00:16:29.634 }' 00:16:29.634 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.634 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.201 [2024-12-06 06:42:48.631608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.201 06:42:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.201 "name": "Existed_Raid", 00:16:30.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.201 "strip_size_kb": 0, 00:16:30.201 "state": "configuring", 00:16:30.201 "raid_level": "raid1", 00:16:30.201 "superblock": false, 00:16:30.201 "num_base_bdevs": 4, 00:16:30.201 "num_base_bdevs_discovered": 3, 00:16:30.201 "num_base_bdevs_operational": 4, 00:16:30.201 "base_bdevs_list": [ 00:16:30.201 { 00:16:30.201 "name": "BaseBdev1", 
00:16:30.201 "uuid": "ea51b56e-bed2-40e0-a4e8-e0507f5272f6", 00:16:30.201 "is_configured": true, 00:16:30.201 "data_offset": 0, 00:16:30.201 "data_size": 65536 00:16:30.201 }, 00:16:30.201 { 00:16:30.201 "name": null, 00:16:30.201 "uuid": "5e13ffb0-e25d-485f-adf0-ef5fb8338216", 00:16:30.201 "is_configured": false, 00:16:30.201 "data_offset": 0, 00:16:30.201 "data_size": 65536 00:16:30.201 }, 00:16:30.201 { 00:16:30.201 "name": "BaseBdev3", 00:16:30.201 "uuid": "4cbe8b17-fc22-4496-a185-cd360eb9a905", 00:16:30.201 "is_configured": true, 00:16:30.201 "data_offset": 0, 00:16:30.201 "data_size": 65536 00:16:30.201 }, 00:16:30.201 { 00:16:30.201 "name": "BaseBdev4", 00:16:30.201 "uuid": "fa14cbdc-bb92-4f50-8215-3adf6b81891f", 00:16:30.201 "is_configured": true, 00:16:30.201 "data_offset": 0, 00:16:30.201 "data_size": 65536 00:16:30.201 } 00:16:30.201 ] 00:16:30.201 }' 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.201 06:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.768 [2024-12-06 06:42:49.216259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.768 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.768 "name": "Existed_Raid", 00:16:30.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.768 "strip_size_kb": 0, 00:16:30.768 "state": "configuring", 00:16:30.768 "raid_level": "raid1", 00:16:30.768 "superblock": false, 00:16:30.768 "num_base_bdevs": 4, 00:16:30.768 "num_base_bdevs_discovered": 2, 00:16:30.768 "num_base_bdevs_operational": 4, 00:16:30.768 "base_bdevs_list": [ 00:16:30.768 { 00:16:30.768 "name": null, 00:16:30.768 "uuid": "ea51b56e-bed2-40e0-a4e8-e0507f5272f6", 00:16:30.768 "is_configured": false, 00:16:30.768 "data_offset": 0, 00:16:30.768 "data_size": 65536 00:16:30.768 }, 00:16:30.768 { 00:16:30.768 "name": null, 00:16:30.768 "uuid": "5e13ffb0-e25d-485f-adf0-ef5fb8338216", 00:16:30.768 "is_configured": false, 00:16:30.768 "data_offset": 0, 00:16:30.768 "data_size": 65536 00:16:30.768 }, 00:16:30.768 { 00:16:30.768 "name": "BaseBdev3", 00:16:30.768 "uuid": "4cbe8b17-fc22-4496-a185-cd360eb9a905", 00:16:30.768 "is_configured": true, 00:16:30.769 "data_offset": 0, 00:16:30.769 "data_size": 65536 00:16:30.769 }, 00:16:30.769 { 00:16:30.769 "name": "BaseBdev4", 00:16:30.769 "uuid": "fa14cbdc-bb92-4f50-8215-3adf6b81891f", 00:16:30.769 "is_configured": true, 00:16:30.769 "data_offset": 0, 00:16:30.769 "data_size": 65536 00:16:30.769 } 00:16:30.769 ] 00:16:30.769 }' 00:16:30.769 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.769 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.336 [2024-12-06 06:42:49.865488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.336 "name": "Existed_Raid", 00:16:31.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.336 "strip_size_kb": 0, 00:16:31.336 "state": "configuring", 00:16:31.336 "raid_level": "raid1", 00:16:31.336 "superblock": false, 00:16:31.336 "num_base_bdevs": 4, 00:16:31.336 "num_base_bdevs_discovered": 3, 00:16:31.336 "num_base_bdevs_operational": 4, 00:16:31.336 "base_bdevs_list": [ 00:16:31.336 { 00:16:31.336 "name": null, 00:16:31.336 "uuid": "ea51b56e-bed2-40e0-a4e8-e0507f5272f6", 00:16:31.336 "is_configured": false, 00:16:31.336 "data_offset": 0, 00:16:31.336 "data_size": 65536 00:16:31.336 }, 00:16:31.336 { 00:16:31.336 "name": "BaseBdev2", 00:16:31.336 "uuid": "5e13ffb0-e25d-485f-adf0-ef5fb8338216", 00:16:31.336 "is_configured": true, 00:16:31.336 "data_offset": 0, 00:16:31.336 "data_size": 65536 00:16:31.336 }, 00:16:31.336 { 00:16:31.336 "name": "BaseBdev3", 00:16:31.336 "uuid": "4cbe8b17-fc22-4496-a185-cd360eb9a905", 00:16:31.336 "is_configured": true, 00:16:31.336 "data_offset": 0, 00:16:31.336 "data_size": 65536 00:16:31.336 }, 00:16:31.336 { 00:16:31.336 "name": "BaseBdev4", 00:16:31.336 "uuid": "fa14cbdc-bb92-4f50-8215-3adf6b81891f", 00:16:31.336 
"is_configured": true, 00:16:31.336 "data_offset": 0, 00:16:31.336 "data_size": 65536 00:16:31.336 } 00:16:31.336 ] 00:16:31.336 }' 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.336 06:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ea51b56e-bed2-40e0-a4e8-e0507f5272f6 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.903 [2024-12-06 
06:42:50.519494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:31.903 [2024-12-06 06:42:50.519579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:31.903 [2024-12-06 06:42:50.519596] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:31.903 [2024-12-06 06:42:50.519932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:31.903 [2024-12-06 06:42:50.520139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:31.903 [2024-12-06 06:42:50.520156] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:31.903 [2024-12-06 06:42:50.520461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.903 NewBaseBdev 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.903 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.903 [ 00:16:31.903 { 00:16:31.903 "name": "NewBaseBdev", 00:16:31.903 "aliases": [ 00:16:31.903 "ea51b56e-bed2-40e0-a4e8-e0507f5272f6" 00:16:31.903 ], 00:16:31.903 "product_name": "Malloc disk", 00:16:31.903 "block_size": 512, 00:16:31.903 "num_blocks": 65536, 00:16:31.903 "uuid": "ea51b56e-bed2-40e0-a4e8-e0507f5272f6", 00:16:31.903 "assigned_rate_limits": { 00:16:31.903 "rw_ios_per_sec": 0, 00:16:31.903 "rw_mbytes_per_sec": 0, 00:16:31.903 "r_mbytes_per_sec": 0, 00:16:31.903 "w_mbytes_per_sec": 0 00:16:31.903 }, 00:16:31.903 "claimed": true, 00:16:31.904 "claim_type": "exclusive_write", 00:16:31.904 "zoned": false, 00:16:31.904 "supported_io_types": { 00:16:31.904 "read": true, 00:16:31.904 "write": true, 00:16:31.904 "unmap": true, 00:16:31.904 "flush": true, 00:16:31.904 "reset": true, 00:16:31.904 "nvme_admin": false, 00:16:31.904 "nvme_io": false, 00:16:31.904 "nvme_io_md": false, 00:16:31.904 "write_zeroes": true, 00:16:31.904 "zcopy": true, 00:16:31.904 "get_zone_info": false, 00:16:31.904 "zone_management": false, 00:16:31.904 "zone_append": false, 00:16:31.904 "compare": false, 00:16:32.162 "compare_and_write": false, 00:16:32.162 "abort": true, 00:16:32.162 "seek_hole": false, 00:16:32.162 "seek_data": false, 00:16:32.162 "copy": true, 00:16:32.162 "nvme_iov_md": false 00:16:32.162 }, 00:16:32.162 "memory_domains": [ 00:16:32.162 { 00:16:32.162 "dma_device_id": "system", 00:16:32.162 "dma_device_type": 1 00:16:32.162 }, 00:16:32.162 { 00:16:32.162 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.162 "dma_device_type": 2 00:16:32.162 } 00:16:32.162 ], 00:16:32.162 "driver_specific": {} 00:16:32.162 } 00:16:32.162 ] 00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.162 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.162 "name": "Existed_Raid", 00:16:32.162 "uuid": "565fbc4f-febf-4fcf-ad03-df377476a933", 00:16:32.162 "strip_size_kb": 0, 00:16:32.162 "state": "online", 00:16:32.162 "raid_level": "raid1", 00:16:32.162 "superblock": false, 00:16:32.162 "num_base_bdevs": 4, 00:16:32.162 "num_base_bdevs_discovered": 4, 00:16:32.162 "num_base_bdevs_operational": 4, 00:16:32.162 "base_bdevs_list": [ 00:16:32.162 { 00:16:32.163 "name": "NewBaseBdev", 00:16:32.163 "uuid": "ea51b56e-bed2-40e0-a4e8-e0507f5272f6", 00:16:32.163 "is_configured": true, 00:16:32.163 "data_offset": 0, 00:16:32.163 "data_size": 65536 00:16:32.163 }, 00:16:32.163 { 00:16:32.163 "name": "BaseBdev2", 00:16:32.163 "uuid": "5e13ffb0-e25d-485f-adf0-ef5fb8338216", 00:16:32.163 "is_configured": true, 00:16:32.163 "data_offset": 0, 00:16:32.163 "data_size": 65536 00:16:32.163 }, 00:16:32.163 { 00:16:32.163 "name": "BaseBdev3", 00:16:32.163 "uuid": "4cbe8b17-fc22-4496-a185-cd360eb9a905", 00:16:32.163 "is_configured": true, 00:16:32.163 "data_offset": 0, 00:16:32.163 "data_size": 65536 00:16:32.163 }, 00:16:32.163 { 00:16:32.163 "name": "BaseBdev4", 00:16:32.163 "uuid": "fa14cbdc-bb92-4f50-8215-3adf6b81891f", 00:16:32.163 "is_configured": true, 00:16:32.163 "data_offset": 0, 00:16:32.163 "data_size": 65536 00:16:32.163 } 00:16:32.163 ] 00:16:32.163 }' 00:16:32.163 06:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.163 06:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.421 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:32.421 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 
00:16:32.421 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:32.421 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:32.421 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:32.421 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:32.421 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:32.421 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:32.421 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.421 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.421 [2024-12-06 06:42:51.052111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.679 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.679 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:32.679 "name": "Existed_Raid", 00:16:32.679 "aliases": [ 00:16:32.679 "565fbc4f-febf-4fcf-ad03-df377476a933" 00:16:32.679 ], 00:16:32.679 "product_name": "Raid Volume", 00:16:32.679 "block_size": 512, 00:16:32.679 "num_blocks": 65536, 00:16:32.679 "uuid": "565fbc4f-febf-4fcf-ad03-df377476a933", 00:16:32.679 "assigned_rate_limits": { 00:16:32.679 "rw_ios_per_sec": 0, 00:16:32.679 "rw_mbytes_per_sec": 0, 00:16:32.679 "r_mbytes_per_sec": 0, 00:16:32.679 "w_mbytes_per_sec": 0 00:16:32.679 }, 00:16:32.679 "claimed": false, 00:16:32.679 "zoned": false, 00:16:32.679 "supported_io_types": { 00:16:32.679 "read": true, 00:16:32.679 "write": true, 00:16:32.679 "unmap": false, 00:16:32.679 "flush": false, 00:16:32.679 "reset": true, 00:16:32.679 "nvme_admin": false, 00:16:32.679 "nvme_io": 
false, 00:16:32.679 "nvme_io_md": false, 00:16:32.679 "write_zeroes": true, 00:16:32.679 "zcopy": false, 00:16:32.679 "get_zone_info": false, 00:16:32.679 "zone_management": false, 00:16:32.680 "zone_append": false, 00:16:32.680 "compare": false, 00:16:32.680 "compare_and_write": false, 00:16:32.680 "abort": false, 00:16:32.680 "seek_hole": false, 00:16:32.680 "seek_data": false, 00:16:32.680 "copy": false, 00:16:32.680 "nvme_iov_md": false 00:16:32.680 }, 00:16:32.680 "memory_domains": [ 00:16:32.680 { 00:16:32.680 "dma_device_id": "system", 00:16:32.680 "dma_device_type": 1 00:16:32.680 }, 00:16:32.680 { 00:16:32.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.680 "dma_device_type": 2 00:16:32.680 }, 00:16:32.680 { 00:16:32.680 "dma_device_id": "system", 00:16:32.680 "dma_device_type": 1 00:16:32.680 }, 00:16:32.680 { 00:16:32.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.680 "dma_device_type": 2 00:16:32.680 }, 00:16:32.680 { 00:16:32.680 "dma_device_id": "system", 00:16:32.680 "dma_device_type": 1 00:16:32.680 }, 00:16:32.680 { 00:16:32.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.680 "dma_device_type": 2 00:16:32.680 }, 00:16:32.680 { 00:16:32.680 "dma_device_id": "system", 00:16:32.680 "dma_device_type": 1 00:16:32.680 }, 00:16:32.680 { 00:16:32.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.680 "dma_device_type": 2 00:16:32.680 } 00:16:32.680 ], 00:16:32.680 "driver_specific": { 00:16:32.680 "raid": { 00:16:32.680 "uuid": "565fbc4f-febf-4fcf-ad03-df377476a933", 00:16:32.680 "strip_size_kb": 0, 00:16:32.680 "state": "online", 00:16:32.680 "raid_level": "raid1", 00:16:32.680 "superblock": false, 00:16:32.680 "num_base_bdevs": 4, 00:16:32.680 "num_base_bdevs_discovered": 4, 00:16:32.680 "num_base_bdevs_operational": 4, 00:16:32.680 "base_bdevs_list": [ 00:16:32.680 { 00:16:32.680 "name": "NewBaseBdev", 00:16:32.680 "uuid": "ea51b56e-bed2-40e0-a4e8-e0507f5272f6", 00:16:32.680 "is_configured": true, 00:16:32.680 "data_offset": 0, 
00:16:32.680 "data_size": 65536 00:16:32.680 }, 00:16:32.680 { 00:16:32.680 "name": "BaseBdev2", 00:16:32.680 "uuid": "5e13ffb0-e25d-485f-adf0-ef5fb8338216", 00:16:32.680 "is_configured": true, 00:16:32.680 "data_offset": 0, 00:16:32.680 "data_size": 65536 00:16:32.680 }, 00:16:32.680 { 00:16:32.680 "name": "BaseBdev3", 00:16:32.680 "uuid": "4cbe8b17-fc22-4496-a185-cd360eb9a905", 00:16:32.680 "is_configured": true, 00:16:32.680 "data_offset": 0, 00:16:32.680 "data_size": 65536 00:16:32.680 }, 00:16:32.680 { 00:16:32.680 "name": "BaseBdev4", 00:16:32.680 "uuid": "fa14cbdc-bb92-4f50-8215-3adf6b81891f", 00:16:32.680 "is_configured": true, 00:16:32.680 "data_offset": 0, 00:16:32.680 "data_size": 65536 00:16:32.680 } 00:16:32.680 ] 00:16:32.680 } 00:16:32.680 } 00:16:32.680 }' 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:32.680 BaseBdev2 00:16:32.680 BaseBdev3 00:16:32.680 BaseBdev4' 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.680 06:42:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.938 [2024-12-06 06:42:51.396234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:32.938 [2024-12-06 06:42:51.396269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.938 [2024-12-06 06:42:51.396399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.938 [2024-12-06 06:42:51.396788] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.938 [2024-12-06 06:42:51.396812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73504 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73504 ']' 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73504 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73504 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73504' 00:16:32.938 killing process with pid 73504 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73504 00:16:32.938 [2024-12-06 06:42:51.440039] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:32.938 06:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73504 00:16:33.196 [2024-12-06 06:42:51.794827] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:34.573 00:16:34.573 
real 0m12.822s 00:16:34.573 user 0m21.253s 00:16:34.573 sys 0m1.725s 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.573 ************************************ 00:16:34.573 END TEST raid_state_function_test 00:16:34.573 ************************************ 00:16:34.573 06:42:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:16:34.573 06:42:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:34.573 06:42:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.573 06:42:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:34.573 ************************************ 00:16:34.573 START TEST raid_state_function_test_sb 00:16:34.573 ************************************ 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:34.573 06:42:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.573 Process raid pid: 74182 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74182 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74182' 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74182 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74182 ']' 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.573 06:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.573 [2024-12-06 06:42:53.002989] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:16:34.573 [2024-12-06 06:42:53.003340] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.573 [2024-12-06 06:42:53.175970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.832 [2024-12-06 06:42:53.308544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.091 [2024-12-06 06:42:53.514678] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.092 [2024-12-06 06:42:53.514872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.350 06:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.350 06:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:35.350 06:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:35.350 06:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.350 06:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.610 [2024-12-06 06:42:54.001404] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:35.610 [2024-12-06 06:42:54.001625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:35.610 [2024-12-06 06:42:54.001803] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:35.610 [2024-12-06 06:42:54.001868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:35.610 [2024-12-06 06:42:54.002087] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:35.610 [2024-12-06 06:42:54.002157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:35.610 [2024-12-06 06:42:54.002209] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:35.610 [2024-12-06 06:42:54.002253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.610 06:42:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.610 "name": "Existed_Raid", 00:16:35.610 "uuid": "ebcebe78-677a-4c57-82f0-00a6e2c91193", 00:16:35.610 "strip_size_kb": 0, 00:16:35.610 "state": "configuring", 00:16:35.610 "raid_level": "raid1", 00:16:35.610 "superblock": true, 00:16:35.610 "num_base_bdevs": 4, 00:16:35.610 "num_base_bdevs_discovered": 0, 00:16:35.610 "num_base_bdevs_operational": 4, 00:16:35.610 "base_bdevs_list": [ 00:16:35.610 { 00:16:35.610 "name": "BaseBdev1", 00:16:35.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.610 "is_configured": false, 00:16:35.610 "data_offset": 0, 00:16:35.610 "data_size": 0 00:16:35.610 }, 00:16:35.610 { 00:16:35.610 "name": "BaseBdev2", 00:16:35.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.610 "is_configured": false, 00:16:35.610 "data_offset": 0, 00:16:35.610 "data_size": 0 00:16:35.610 }, 00:16:35.610 { 00:16:35.610 "name": "BaseBdev3", 00:16:35.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.610 "is_configured": false, 00:16:35.610 "data_offset": 0, 00:16:35.610 "data_size": 0 00:16:35.610 }, 00:16:35.610 { 00:16:35.610 "name": "BaseBdev4", 00:16:35.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.610 "is_configured": false, 00:16:35.610 "data_offset": 0, 00:16:35.610 "data_size": 0 00:16:35.610 } 00:16:35.610 ] 00:16:35.610 }' 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.610 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.908 06:42:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:35.908 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.908 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.908 [2024-12-06 06:42:54.513487] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:35.908 [2024-12-06 06:42:54.513559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:35.908 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.908 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:35.908 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.908 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.908 [2024-12-06 06:42:54.521469] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:35.908 [2024-12-06 06:42:54.521541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:35.908 [2024-12-06 06:42:54.521564] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:35.908 [2024-12-06 06:42:54.521589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:35.908 [2024-12-06 06:42:54.521599] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:35.908 [2024-12-06 06:42:54.521614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:35.908 [2024-12-06 06:42:54.521624] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:35.909 [2024-12-06 06:42:54.521639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:35.909 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.909 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:35.909 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.909 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.176 [2024-12-06 06:42:54.566312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.176 BaseBdev1 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.176 [ 00:16:36.176 { 00:16:36.176 "name": "BaseBdev1", 00:16:36.176 "aliases": [ 00:16:36.176 "f8318721-41db-4475-a975-c44e3568cfec" 00:16:36.176 ], 00:16:36.176 "product_name": "Malloc disk", 00:16:36.176 "block_size": 512, 00:16:36.176 "num_blocks": 65536, 00:16:36.176 "uuid": "f8318721-41db-4475-a975-c44e3568cfec", 00:16:36.176 "assigned_rate_limits": { 00:16:36.176 "rw_ios_per_sec": 0, 00:16:36.176 "rw_mbytes_per_sec": 0, 00:16:36.176 "r_mbytes_per_sec": 0, 00:16:36.176 "w_mbytes_per_sec": 0 00:16:36.176 }, 00:16:36.176 "claimed": true, 00:16:36.176 "claim_type": "exclusive_write", 00:16:36.176 "zoned": false, 00:16:36.176 "supported_io_types": { 00:16:36.176 "read": true, 00:16:36.176 "write": true, 00:16:36.176 "unmap": true, 00:16:36.176 "flush": true, 00:16:36.176 "reset": true, 00:16:36.176 "nvme_admin": false, 00:16:36.176 "nvme_io": false, 00:16:36.176 "nvme_io_md": false, 00:16:36.176 "write_zeroes": true, 00:16:36.176 "zcopy": true, 00:16:36.176 "get_zone_info": false, 00:16:36.176 "zone_management": false, 00:16:36.176 "zone_append": false, 00:16:36.176 "compare": false, 00:16:36.176 "compare_and_write": false, 00:16:36.176 "abort": true, 00:16:36.176 "seek_hole": false, 00:16:36.176 "seek_data": false, 00:16:36.176 "copy": true, 00:16:36.176 "nvme_iov_md": false 00:16:36.176 }, 00:16:36.176 "memory_domains": [ 00:16:36.176 { 00:16:36.176 "dma_device_id": "system", 00:16:36.176 "dma_device_type": 1 00:16:36.176 }, 00:16:36.176 { 00:16:36.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.176 "dma_device_type": 2 00:16:36.176 } 00:16:36.176 ], 00:16:36.176 "driver_specific": {} 
00:16:36.176 } 00:16:36.176 ] 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.176 "name": "Existed_Raid", 00:16:36.176 "uuid": "64e4a97f-a3c3-4ad1-b299-dd47a96622f0", 00:16:36.176 "strip_size_kb": 0, 00:16:36.176 "state": "configuring", 00:16:36.176 "raid_level": "raid1", 00:16:36.176 "superblock": true, 00:16:36.176 "num_base_bdevs": 4, 00:16:36.176 "num_base_bdevs_discovered": 1, 00:16:36.176 "num_base_bdevs_operational": 4, 00:16:36.176 "base_bdevs_list": [ 00:16:36.176 { 00:16:36.176 "name": "BaseBdev1", 00:16:36.176 "uuid": "f8318721-41db-4475-a975-c44e3568cfec", 00:16:36.176 "is_configured": true, 00:16:36.176 "data_offset": 2048, 00:16:36.176 "data_size": 63488 00:16:36.176 }, 00:16:36.176 { 00:16:36.176 "name": "BaseBdev2", 00:16:36.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.176 "is_configured": false, 00:16:36.176 "data_offset": 0, 00:16:36.176 "data_size": 0 00:16:36.176 }, 00:16:36.176 { 00:16:36.176 "name": "BaseBdev3", 00:16:36.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.176 "is_configured": false, 00:16:36.176 "data_offset": 0, 00:16:36.176 "data_size": 0 00:16:36.176 }, 00:16:36.176 { 00:16:36.176 "name": "BaseBdev4", 00:16:36.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.176 "is_configured": false, 00:16:36.176 "data_offset": 0, 00:16:36.176 "data_size": 0 00:16:36.176 } 00:16:36.176 ] 00:16:36.176 }' 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.176 06:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:36.742 [2024-12-06 06:42:55.134561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:36.742 [2024-12-06 06:42:55.134625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.742 [2024-12-06 06:42:55.142606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.742 [2024-12-06 06:42:55.145141] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.742 [2024-12-06 06:42:55.145197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.742 [2024-12-06 06:42:55.145214] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:36.742 [2024-12-06 06:42:55.145232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.742 [2024-12-06 06:42:55.145243] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:36.742 [2024-12-06 06:42:55.145256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:36.742 06:42:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.742 "name": 
"Existed_Raid", 00:16:36.742 "uuid": "3d409696-f777-42c6-89b4-b86e5032f32e", 00:16:36.742 "strip_size_kb": 0, 00:16:36.742 "state": "configuring", 00:16:36.742 "raid_level": "raid1", 00:16:36.742 "superblock": true, 00:16:36.742 "num_base_bdevs": 4, 00:16:36.742 "num_base_bdevs_discovered": 1, 00:16:36.742 "num_base_bdevs_operational": 4, 00:16:36.742 "base_bdevs_list": [ 00:16:36.742 { 00:16:36.742 "name": "BaseBdev1", 00:16:36.742 "uuid": "f8318721-41db-4475-a975-c44e3568cfec", 00:16:36.742 "is_configured": true, 00:16:36.742 "data_offset": 2048, 00:16:36.742 "data_size": 63488 00:16:36.742 }, 00:16:36.742 { 00:16:36.742 "name": "BaseBdev2", 00:16:36.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.742 "is_configured": false, 00:16:36.742 "data_offset": 0, 00:16:36.742 "data_size": 0 00:16:36.742 }, 00:16:36.742 { 00:16:36.742 "name": "BaseBdev3", 00:16:36.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.742 "is_configured": false, 00:16:36.742 "data_offset": 0, 00:16:36.742 "data_size": 0 00:16:36.742 }, 00:16:36.742 { 00:16:36.742 "name": "BaseBdev4", 00:16:36.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.742 "is_configured": false, 00:16:36.742 "data_offset": 0, 00:16:36.742 "data_size": 0 00:16:36.742 } 00:16:36.742 ] 00:16:36.742 }' 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.742 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.308 [2024-12-06 06:42:55.684869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.308 
BaseBdev2 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.308 [ 00:16:37.308 { 00:16:37.308 "name": "BaseBdev2", 00:16:37.308 "aliases": [ 00:16:37.308 "324aa347-7c34-4d00-9a92-ffff3121a68a" 00:16:37.308 ], 00:16:37.308 "product_name": "Malloc disk", 00:16:37.308 "block_size": 512, 00:16:37.308 "num_blocks": 65536, 00:16:37.308 "uuid": "324aa347-7c34-4d00-9a92-ffff3121a68a", 00:16:37.308 "assigned_rate_limits": { 
00:16:37.308 "rw_ios_per_sec": 0, 00:16:37.308 "rw_mbytes_per_sec": 0, 00:16:37.308 "r_mbytes_per_sec": 0, 00:16:37.308 "w_mbytes_per_sec": 0 00:16:37.308 }, 00:16:37.308 "claimed": true, 00:16:37.308 "claim_type": "exclusive_write", 00:16:37.308 "zoned": false, 00:16:37.308 "supported_io_types": { 00:16:37.308 "read": true, 00:16:37.308 "write": true, 00:16:37.308 "unmap": true, 00:16:37.308 "flush": true, 00:16:37.308 "reset": true, 00:16:37.308 "nvme_admin": false, 00:16:37.308 "nvme_io": false, 00:16:37.308 "nvme_io_md": false, 00:16:37.308 "write_zeroes": true, 00:16:37.308 "zcopy": true, 00:16:37.308 "get_zone_info": false, 00:16:37.308 "zone_management": false, 00:16:37.308 "zone_append": false, 00:16:37.308 "compare": false, 00:16:37.308 "compare_and_write": false, 00:16:37.308 "abort": true, 00:16:37.308 "seek_hole": false, 00:16:37.308 "seek_data": false, 00:16:37.308 "copy": true, 00:16:37.308 "nvme_iov_md": false 00:16:37.308 }, 00:16:37.308 "memory_domains": [ 00:16:37.308 { 00:16:37.308 "dma_device_id": "system", 00:16:37.308 "dma_device_type": 1 00:16:37.308 }, 00:16:37.308 { 00:16:37.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.308 "dma_device_type": 2 00:16:37.308 } 00:16:37.308 ], 00:16:37.308 "driver_specific": {} 00:16:37.308 } 00:16:37.308 ] 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.308 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.309 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.309 "name": "Existed_Raid", 00:16:37.309 "uuid": "3d409696-f777-42c6-89b4-b86e5032f32e", 00:16:37.309 "strip_size_kb": 0, 00:16:37.309 "state": "configuring", 00:16:37.309 "raid_level": "raid1", 00:16:37.309 "superblock": true, 00:16:37.309 "num_base_bdevs": 4, 00:16:37.309 "num_base_bdevs_discovered": 2, 00:16:37.309 "num_base_bdevs_operational": 4, 00:16:37.309 
"base_bdevs_list": [ 00:16:37.309 { 00:16:37.309 "name": "BaseBdev1", 00:16:37.309 "uuid": "f8318721-41db-4475-a975-c44e3568cfec", 00:16:37.309 "is_configured": true, 00:16:37.309 "data_offset": 2048, 00:16:37.309 "data_size": 63488 00:16:37.309 }, 00:16:37.309 { 00:16:37.309 "name": "BaseBdev2", 00:16:37.309 "uuid": "324aa347-7c34-4d00-9a92-ffff3121a68a", 00:16:37.309 "is_configured": true, 00:16:37.309 "data_offset": 2048, 00:16:37.309 "data_size": 63488 00:16:37.309 }, 00:16:37.309 { 00:16:37.309 "name": "BaseBdev3", 00:16:37.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.309 "is_configured": false, 00:16:37.309 "data_offset": 0, 00:16:37.309 "data_size": 0 00:16:37.309 }, 00:16:37.309 { 00:16:37.309 "name": "BaseBdev4", 00:16:37.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.309 "is_configured": false, 00:16:37.309 "data_offset": 0, 00:16:37.309 "data_size": 0 00:16:37.309 } 00:16:37.309 ] 00:16:37.309 }' 00:16:37.309 06:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.309 06:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.875 [2024-12-06 06:42:56.277718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:37.875 BaseBdev3 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.875 [ 00:16:37.875 { 00:16:37.875 "name": "BaseBdev3", 00:16:37.875 "aliases": [ 00:16:37.875 "bc9bdbaf-6ca5-4f35-b125-13c812ae2f45" 00:16:37.875 ], 00:16:37.875 "product_name": "Malloc disk", 00:16:37.875 "block_size": 512, 00:16:37.875 "num_blocks": 65536, 00:16:37.875 "uuid": "bc9bdbaf-6ca5-4f35-b125-13c812ae2f45", 00:16:37.875 "assigned_rate_limits": { 00:16:37.875 "rw_ios_per_sec": 0, 00:16:37.875 "rw_mbytes_per_sec": 0, 00:16:37.875 "r_mbytes_per_sec": 0, 00:16:37.875 "w_mbytes_per_sec": 0 00:16:37.875 }, 00:16:37.875 "claimed": true, 00:16:37.875 "claim_type": "exclusive_write", 00:16:37.875 "zoned": false, 00:16:37.875 "supported_io_types": { 00:16:37.875 "read": true, 00:16:37.875 
"write": true, 00:16:37.875 "unmap": true, 00:16:37.875 "flush": true, 00:16:37.875 "reset": true, 00:16:37.875 "nvme_admin": false, 00:16:37.875 "nvme_io": false, 00:16:37.875 "nvme_io_md": false, 00:16:37.875 "write_zeroes": true, 00:16:37.875 "zcopy": true, 00:16:37.875 "get_zone_info": false, 00:16:37.875 "zone_management": false, 00:16:37.875 "zone_append": false, 00:16:37.875 "compare": false, 00:16:37.875 "compare_and_write": false, 00:16:37.875 "abort": true, 00:16:37.875 "seek_hole": false, 00:16:37.875 "seek_data": false, 00:16:37.875 "copy": true, 00:16:37.875 "nvme_iov_md": false 00:16:37.875 }, 00:16:37.875 "memory_domains": [ 00:16:37.875 { 00:16:37.875 "dma_device_id": "system", 00:16:37.875 "dma_device_type": 1 00:16:37.875 }, 00:16:37.875 { 00:16:37.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.875 "dma_device_type": 2 00:16:37.875 } 00:16:37.875 ], 00:16:37.875 "driver_specific": {} 00:16:37.875 } 00:16:37.875 ] 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.875 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.875 "name": "Existed_Raid", 00:16:37.875 "uuid": "3d409696-f777-42c6-89b4-b86e5032f32e", 00:16:37.875 "strip_size_kb": 0, 00:16:37.875 "state": "configuring", 00:16:37.875 "raid_level": "raid1", 00:16:37.875 "superblock": true, 00:16:37.875 "num_base_bdevs": 4, 00:16:37.875 "num_base_bdevs_discovered": 3, 00:16:37.875 "num_base_bdevs_operational": 4, 00:16:37.875 "base_bdevs_list": [ 00:16:37.875 { 00:16:37.875 "name": "BaseBdev1", 00:16:37.875 "uuid": "f8318721-41db-4475-a975-c44e3568cfec", 00:16:37.875 "is_configured": true, 00:16:37.875 "data_offset": 2048, 00:16:37.875 "data_size": 63488 00:16:37.875 }, 00:16:37.876 { 00:16:37.876 "name": "BaseBdev2", 00:16:37.876 "uuid": 
"324aa347-7c34-4d00-9a92-ffff3121a68a", 00:16:37.876 "is_configured": true, 00:16:37.876 "data_offset": 2048, 00:16:37.876 "data_size": 63488 00:16:37.876 }, 00:16:37.876 { 00:16:37.876 "name": "BaseBdev3", 00:16:37.876 "uuid": "bc9bdbaf-6ca5-4f35-b125-13c812ae2f45", 00:16:37.876 "is_configured": true, 00:16:37.876 "data_offset": 2048, 00:16:37.876 "data_size": 63488 00:16:37.876 }, 00:16:37.876 { 00:16:37.876 "name": "BaseBdev4", 00:16:37.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.876 "is_configured": false, 00:16:37.876 "data_offset": 0, 00:16:37.876 "data_size": 0 00:16:37.876 } 00:16:37.876 ] 00:16:37.876 }' 00:16:37.876 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.876 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.441 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:38.441 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.441 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.441 [2024-12-06 06:42:56.853147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:38.441 [2024-12-06 06:42:56.853500] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:38.441 [2024-12-06 06:42:56.853547] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:38.441 [2024-12-06 06:42:56.853917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:38.441 BaseBdev4 00:16:38.441 [2024-12-06 06:42:56.854126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:38.441 [2024-12-06 06:42:56.854149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:16:38.441 [2024-12-06 06:42:56.854338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.441 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.441 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:38.441 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:38.441 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:38.441 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:38.441 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.441 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.441 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:38.441 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.441 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.441 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.441 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:38.441 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.441 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.441 [ 00:16:38.441 { 00:16:38.441 "name": "BaseBdev4", 00:16:38.441 "aliases": [ 00:16:38.441 "de349d02-c5e4-4ab3-ba18-88020e81b6f4" 00:16:38.441 ], 00:16:38.441 "product_name": "Malloc disk", 00:16:38.441 "block_size": 512, 00:16:38.441 
"num_blocks": 65536, 00:16:38.441 "uuid": "de349d02-c5e4-4ab3-ba18-88020e81b6f4", 00:16:38.441 "assigned_rate_limits": { 00:16:38.441 "rw_ios_per_sec": 0, 00:16:38.441 "rw_mbytes_per_sec": 0, 00:16:38.441 "r_mbytes_per_sec": 0, 00:16:38.441 "w_mbytes_per_sec": 0 00:16:38.441 }, 00:16:38.441 "claimed": true, 00:16:38.441 "claim_type": "exclusive_write", 00:16:38.442 "zoned": false, 00:16:38.442 "supported_io_types": { 00:16:38.442 "read": true, 00:16:38.442 "write": true, 00:16:38.442 "unmap": true, 00:16:38.442 "flush": true, 00:16:38.442 "reset": true, 00:16:38.442 "nvme_admin": false, 00:16:38.442 "nvme_io": false, 00:16:38.442 "nvme_io_md": false, 00:16:38.442 "write_zeroes": true, 00:16:38.442 "zcopy": true, 00:16:38.442 "get_zone_info": false, 00:16:38.442 "zone_management": false, 00:16:38.442 "zone_append": false, 00:16:38.442 "compare": false, 00:16:38.442 "compare_and_write": false, 00:16:38.442 "abort": true, 00:16:38.442 "seek_hole": false, 00:16:38.442 "seek_data": false, 00:16:38.442 "copy": true, 00:16:38.442 "nvme_iov_md": false 00:16:38.442 }, 00:16:38.442 "memory_domains": [ 00:16:38.442 { 00:16:38.442 "dma_device_id": "system", 00:16:38.442 "dma_device_type": 1 00:16:38.442 }, 00:16:38.442 { 00:16:38.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.442 "dma_device_type": 2 00:16:38.442 } 00:16:38.442 ], 00:16:38.442 "driver_specific": {} 00:16:38.442 } 00:16:38.442 ] 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.442 "name": "Existed_Raid", 00:16:38.442 "uuid": "3d409696-f777-42c6-89b4-b86e5032f32e", 00:16:38.442 "strip_size_kb": 0, 00:16:38.442 "state": "online", 00:16:38.442 "raid_level": "raid1", 00:16:38.442 "superblock": true, 00:16:38.442 "num_base_bdevs": 4, 
00:16:38.442 "num_base_bdevs_discovered": 4, 00:16:38.442 "num_base_bdevs_operational": 4, 00:16:38.442 "base_bdevs_list": [ 00:16:38.442 { 00:16:38.442 "name": "BaseBdev1", 00:16:38.442 "uuid": "f8318721-41db-4475-a975-c44e3568cfec", 00:16:38.442 "is_configured": true, 00:16:38.442 "data_offset": 2048, 00:16:38.442 "data_size": 63488 00:16:38.442 }, 00:16:38.442 { 00:16:38.442 "name": "BaseBdev2", 00:16:38.442 "uuid": "324aa347-7c34-4d00-9a92-ffff3121a68a", 00:16:38.442 "is_configured": true, 00:16:38.442 "data_offset": 2048, 00:16:38.442 "data_size": 63488 00:16:38.442 }, 00:16:38.442 { 00:16:38.442 "name": "BaseBdev3", 00:16:38.442 "uuid": "bc9bdbaf-6ca5-4f35-b125-13c812ae2f45", 00:16:38.442 "is_configured": true, 00:16:38.442 "data_offset": 2048, 00:16:38.442 "data_size": 63488 00:16:38.442 }, 00:16:38.442 { 00:16:38.442 "name": "BaseBdev4", 00:16:38.442 "uuid": "de349d02-c5e4-4ab3-ba18-88020e81b6f4", 00:16:38.442 "is_configured": true, 00:16:38.442 "data_offset": 2048, 00:16:38.442 "data_size": 63488 00:16:38.442 } 00:16:38.442 ] 00:16:38.442 }' 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.442 06:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.014 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:39.014 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:39.014 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:39.014 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:39.014 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:39.014 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:39.014 
06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:39.014 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.014 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:39.014 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.014 [2024-12-06 06:42:57.413849] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.014 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.014 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:39.014 "name": "Existed_Raid", 00:16:39.014 "aliases": [ 00:16:39.014 "3d409696-f777-42c6-89b4-b86e5032f32e" 00:16:39.014 ], 00:16:39.014 "product_name": "Raid Volume", 00:16:39.014 "block_size": 512, 00:16:39.014 "num_blocks": 63488, 00:16:39.014 "uuid": "3d409696-f777-42c6-89b4-b86e5032f32e", 00:16:39.014 "assigned_rate_limits": { 00:16:39.014 "rw_ios_per_sec": 0, 00:16:39.014 "rw_mbytes_per_sec": 0, 00:16:39.014 "r_mbytes_per_sec": 0, 00:16:39.014 "w_mbytes_per_sec": 0 00:16:39.014 }, 00:16:39.014 "claimed": false, 00:16:39.014 "zoned": false, 00:16:39.014 "supported_io_types": { 00:16:39.014 "read": true, 00:16:39.014 "write": true, 00:16:39.014 "unmap": false, 00:16:39.014 "flush": false, 00:16:39.014 "reset": true, 00:16:39.014 "nvme_admin": false, 00:16:39.014 "nvme_io": false, 00:16:39.014 "nvme_io_md": false, 00:16:39.014 "write_zeroes": true, 00:16:39.014 "zcopy": false, 00:16:39.014 "get_zone_info": false, 00:16:39.014 "zone_management": false, 00:16:39.014 "zone_append": false, 00:16:39.014 "compare": false, 00:16:39.014 "compare_and_write": false, 00:16:39.014 "abort": false, 00:16:39.014 "seek_hole": false, 00:16:39.014 "seek_data": false, 00:16:39.014 "copy": false, 00:16:39.014 
"nvme_iov_md": false 00:16:39.014 }, 00:16:39.014 "memory_domains": [ 00:16:39.014 { 00:16:39.014 "dma_device_id": "system", 00:16:39.014 "dma_device_type": 1 00:16:39.014 }, 00:16:39.014 { 00:16:39.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.015 "dma_device_type": 2 00:16:39.015 }, 00:16:39.015 { 00:16:39.015 "dma_device_id": "system", 00:16:39.015 "dma_device_type": 1 00:16:39.015 }, 00:16:39.015 { 00:16:39.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.015 "dma_device_type": 2 00:16:39.015 }, 00:16:39.015 { 00:16:39.015 "dma_device_id": "system", 00:16:39.015 "dma_device_type": 1 00:16:39.015 }, 00:16:39.015 { 00:16:39.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.015 "dma_device_type": 2 00:16:39.015 }, 00:16:39.015 { 00:16:39.015 "dma_device_id": "system", 00:16:39.015 "dma_device_type": 1 00:16:39.015 }, 00:16:39.015 { 00:16:39.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.015 "dma_device_type": 2 00:16:39.015 } 00:16:39.015 ], 00:16:39.015 "driver_specific": { 00:16:39.015 "raid": { 00:16:39.015 "uuid": "3d409696-f777-42c6-89b4-b86e5032f32e", 00:16:39.015 "strip_size_kb": 0, 00:16:39.015 "state": "online", 00:16:39.015 "raid_level": "raid1", 00:16:39.015 "superblock": true, 00:16:39.015 "num_base_bdevs": 4, 00:16:39.015 "num_base_bdevs_discovered": 4, 00:16:39.015 "num_base_bdevs_operational": 4, 00:16:39.015 "base_bdevs_list": [ 00:16:39.015 { 00:16:39.015 "name": "BaseBdev1", 00:16:39.015 "uuid": "f8318721-41db-4475-a975-c44e3568cfec", 00:16:39.015 "is_configured": true, 00:16:39.015 "data_offset": 2048, 00:16:39.015 "data_size": 63488 00:16:39.015 }, 00:16:39.015 { 00:16:39.015 "name": "BaseBdev2", 00:16:39.015 "uuid": "324aa347-7c34-4d00-9a92-ffff3121a68a", 00:16:39.015 "is_configured": true, 00:16:39.015 "data_offset": 2048, 00:16:39.015 "data_size": 63488 00:16:39.015 }, 00:16:39.015 { 00:16:39.015 "name": "BaseBdev3", 00:16:39.015 "uuid": "bc9bdbaf-6ca5-4f35-b125-13c812ae2f45", 00:16:39.015 "is_configured": true, 
00:16:39.015 "data_offset": 2048, 00:16:39.015 "data_size": 63488 00:16:39.015 }, 00:16:39.015 { 00:16:39.015 "name": "BaseBdev4", 00:16:39.015 "uuid": "de349d02-c5e4-4ab3-ba18-88020e81b6f4", 00:16:39.015 "is_configured": true, 00:16:39.015 "data_offset": 2048, 00:16:39.015 "data_size": 63488 00:16:39.015 } 00:16:39.015 ] 00:16:39.015 } 00:16:39.015 } 00:16:39.015 }' 00:16:39.015 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:39.015 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:39.015 BaseBdev2 00:16:39.015 BaseBdev3 00:16:39.015 BaseBdev4' 00:16:39.015 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.015 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:39.015 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.015 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.015 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:39.015 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.015 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.015 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.015 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.015 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.015 06:42:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.015 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.015 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:39.015 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.015 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.015 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.273 [2024-12-06 06:42:57.773626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:39.273 06:42:57 
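The `has_redundancy raid1` / `expected_state=online` steps traced above decide whether the array should survive a base-bdev removal. A minimal stand-alone sketch of that dispatch, reconstructed from this trace (the `raid1` mapping is visible above; treating other levels as non-redundant is an assumption for illustration, not a verbatim copy of `bdev_raid.sh`):

```shell
# Sketch of the has_redundancy -> expected_state logic seen in the
# trace above. Only raid1's behavior is confirmed by this log; the
# fallback branch is an assumption for illustration.
has_redundancy() {
    case $1 in
        raid1) return 0 ;;   # redundant: survives losing a base bdev
        *) return 1 ;;       # assumed non-redundant (e.g. raid0/concat)
    esac
}

if has_redundancy raid1; then
    expected_state=online    # array stays online after one removal
else
    expected_state=offline
fi
echo "$expected_state"
```

The escaped-glob comparisons in the trace (`[[ 512 == \5\1\2\ \ \ ]]`) serve the same purpose as the plain string tests here: `case`/`[[ ]]` pattern matching avoids spawning an external process per check.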
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.273 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.530 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.530 "name": "Existed_Raid", 00:16:39.530 "uuid": "3d409696-f777-42c6-89b4-b86e5032f32e", 00:16:39.530 "strip_size_kb": 0, 00:16:39.530 
"state": "online", 00:16:39.530 "raid_level": "raid1", 00:16:39.530 "superblock": true, 00:16:39.530 "num_base_bdevs": 4, 00:16:39.530 "num_base_bdevs_discovered": 3, 00:16:39.530 "num_base_bdevs_operational": 3, 00:16:39.530 "base_bdevs_list": [ 00:16:39.530 { 00:16:39.530 "name": null, 00:16:39.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.530 "is_configured": false, 00:16:39.530 "data_offset": 0, 00:16:39.530 "data_size": 63488 00:16:39.530 }, 00:16:39.530 { 00:16:39.530 "name": "BaseBdev2", 00:16:39.530 "uuid": "324aa347-7c34-4d00-9a92-ffff3121a68a", 00:16:39.530 "is_configured": true, 00:16:39.530 "data_offset": 2048, 00:16:39.530 "data_size": 63488 00:16:39.530 }, 00:16:39.530 { 00:16:39.530 "name": "BaseBdev3", 00:16:39.530 "uuid": "bc9bdbaf-6ca5-4f35-b125-13c812ae2f45", 00:16:39.530 "is_configured": true, 00:16:39.530 "data_offset": 2048, 00:16:39.530 "data_size": 63488 00:16:39.530 }, 00:16:39.530 { 00:16:39.530 "name": "BaseBdev4", 00:16:39.530 "uuid": "de349d02-c5e4-4ab3-ba18-88020e81b6f4", 00:16:39.530 "is_configured": true, 00:16:39.530 "data_offset": 2048, 00:16:39.530 "data_size": 63488 00:16:39.530 } 00:16:39.530 ] 00:16:39.530 }' 00:16:39.530 06:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.530 06:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.788 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:39.788 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:39.788 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.788 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:39.788 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.788 06:42:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.788 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.788 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:39.788 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:39.788 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:39.788 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.788 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.046 [2024-12-06 06:42:58.433618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.046 [2024-12-06 06:42:58.586751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.046 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.333 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.333 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:40.333 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:40.333 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:40.333 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.333 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.333 [2024-12-06 06:42:58.732642] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:40.333 [2024-12-06 06:42:58.732773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.333 [2024-12-06 06:42:58.817215] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.333 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.333 [2024-12-06 06:42:58.818597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.333 [2024-12-06 06:42:58.818631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:40.333 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:40.333 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:40.333 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.333 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:40.333 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.333 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.333 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.333 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:40.333 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:40.333 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb 
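The deletions traced above (`bdev_malloc_delete BaseBdev2` .. `BaseBdev4`, ending in `raid_bdev_deconfigure ... online to offline`) follow the counting-loop shape visible in the `bdev_raid.sh@270`/`@276` markers. A simplified, self-contained sketch of that loop, with `rpc_cmd` stubbed out since the real helper talks to SPDK's `rpc.py`:

```shell
# Sketch of the base-bdev removal loop above: delete the malloc base
# bdevs one at a time; a raid1 array stays online until the last data
# copy is gone, then deconfigures to offline. rpc_cmd is a stub here
# (the real script invokes SPDK's rpc.py), so this runs stand-alone.
rpc_cmd() { echo "rpc: $*"; }

num_base_bdevs=4
for ((i = 1; i < num_base_bdevs; i++)); do
    rpc_cmd bdev_malloc_delete "BaseBdev$((i + 1))"
done
```

Note the loop starts at `i = 1` and deletes `BaseBdev2` onward, matching the trace: `BaseBdev1` was already removed before this loop began.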
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.334 BaseBdev2 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:40.334 [ 00:16:40.334 { 00:16:40.334 "name": "BaseBdev2", 00:16:40.334 "aliases": [ 00:16:40.334 "af9e3575-a78e-47e2-be9d-15c43c03a3b8" 00:16:40.334 ], 00:16:40.334 "product_name": "Malloc disk", 00:16:40.334 "block_size": 512, 00:16:40.334 "num_blocks": 65536, 00:16:40.334 "uuid": "af9e3575-a78e-47e2-be9d-15c43c03a3b8", 00:16:40.334 "assigned_rate_limits": { 00:16:40.334 "rw_ios_per_sec": 0, 00:16:40.334 "rw_mbytes_per_sec": 0, 00:16:40.334 "r_mbytes_per_sec": 0, 00:16:40.334 "w_mbytes_per_sec": 0 00:16:40.334 }, 00:16:40.334 "claimed": false, 00:16:40.334 "zoned": false, 00:16:40.334 "supported_io_types": { 00:16:40.334 "read": true, 00:16:40.334 "write": true, 00:16:40.334 "unmap": true, 00:16:40.334 "flush": true, 00:16:40.334 "reset": true, 00:16:40.334 "nvme_admin": false, 00:16:40.334 "nvme_io": false, 00:16:40.334 "nvme_io_md": false, 00:16:40.334 "write_zeroes": true, 00:16:40.334 "zcopy": true, 00:16:40.334 "get_zone_info": false, 00:16:40.334 "zone_management": false, 00:16:40.334 "zone_append": false, 00:16:40.334 "compare": false, 00:16:40.334 "compare_and_write": false, 00:16:40.334 "abort": true, 00:16:40.334 "seek_hole": false, 00:16:40.334 "seek_data": false, 00:16:40.334 "copy": true, 00:16:40.334 "nvme_iov_md": false 00:16:40.334 }, 00:16:40.334 "memory_domains": [ 00:16:40.334 { 00:16:40.334 "dma_device_id": "system", 00:16:40.334 "dma_device_type": 1 00:16:40.334 }, 00:16:40.334 { 00:16:40.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.334 "dma_device_type": 2 00:16:40.334 } 00:16:40.334 ], 00:16:40.334 "driver_specific": {} 00:16:40.334 } 00:16:40.334 ] 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:40.334 06:42:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.334 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.626 BaseBdev3 00:16:40.626 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.626 06:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:40.626 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:40.626 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:40.626 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:40.626 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:40.626 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:40.626 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:40.626 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.626 06:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.626 06:42:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.626 [ 00:16:40.626 { 00:16:40.626 "name": "BaseBdev3", 00:16:40.626 "aliases": [ 00:16:40.626 "3301c685-7df9-42ef-9097-6896a4bce989" 00:16:40.626 ], 00:16:40.626 "product_name": "Malloc disk", 00:16:40.626 "block_size": 512, 00:16:40.626 "num_blocks": 65536, 00:16:40.626 "uuid": "3301c685-7df9-42ef-9097-6896a4bce989", 00:16:40.626 "assigned_rate_limits": { 00:16:40.626 "rw_ios_per_sec": 0, 00:16:40.626 "rw_mbytes_per_sec": 0, 00:16:40.626 "r_mbytes_per_sec": 0, 00:16:40.626 "w_mbytes_per_sec": 0 00:16:40.626 }, 00:16:40.626 "claimed": false, 00:16:40.626 "zoned": false, 00:16:40.626 "supported_io_types": { 00:16:40.626 "read": true, 00:16:40.626 "write": true, 00:16:40.626 "unmap": true, 00:16:40.626 "flush": true, 00:16:40.626 "reset": true, 00:16:40.626 "nvme_admin": false, 00:16:40.626 "nvme_io": false, 00:16:40.626 "nvme_io_md": false, 00:16:40.626 "write_zeroes": true, 00:16:40.626 "zcopy": true, 00:16:40.626 "get_zone_info": false, 00:16:40.626 "zone_management": false, 00:16:40.626 "zone_append": false, 00:16:40.626 "compare": false, 00:16:40.626 "compare_and_write": false, 00:16:40.626 "abort": true, 00:16:40.626 "seek_hole": false, 00:16:40.626 "seek_data": false, 00:16:40.626 "copy": true, 00:16:40.626 "nvme_iov_md": false 00:16:40.626 }, 00:16:40.626 "memory_domains": [ 00:16:40.626 { 00:16:40.626 "dma_device_id": "system", 00:16:40.626 "dma_device_type": 1 00:16:40.626 }, 00:16:40.626 { 00:16:40.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.626 "dma_device_type": 2 00:16:40.626 } 00:16:40.626 ], 00:16:40.626 "driver_specific": {} 00:16:40.626 } 00:16:40.626 ] 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.626 BaseBdev4 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.626 [ 00:16:40.626 { 00:16:40.626 "name": "BaseBdev4", 00:16:40.626 "aliases": [ 00:16:40.626 "8fced57e-7c88-4e25-9ee0-810736f72ca0" 00:16:40.626 ], 00:16:40.626 "product_name": "Malloc disk", 00:16:40.626 "block_size": 512, 00:16:40.626 "num_blocks": 65536, 00:16:40.626 "uuid": "8fced57e-7c88-4e25-9ee0-810736f72ca0", 00:16:40.626 "assigned_rate_limits": { 00:16:40.626 "rw_ios_per_sec": 0, 00:16:40.626 "rw_mbytes_per_sec": 0, 00:16:40.626 "r_mbytes_per_sec": 0, 00:16:40.626 "w_mbytes_per_sec": 0 00:16:40.626 }, 00:16:40.626 "claimed": false, 00:16:40.626 "zoned": false, 00:16:40.626 "supported_io_types": { 00:16:40.626 "read": true, 00:16:40.626 "write": true, 00:16:40.626 "unmap": true, 00:16:40.626 "flush": true, 00:16:40.626 "reset": true, 00:16:40.626 "nvme_admin": false, 00:16:40.626 "nvme_io": false, 00:16:40.626 "nvme_io_md": false, 00:16:40.626 "write_zeroes": true, 00:16:40.626 "zcopy": true, 00:16:40.626 "get_zone_info": false, 00:16:40.626 "zone_management": false, 00:16:40.626 "zone_append": false, 00:16:40.626 "compare": false, 00:16:40.626 "compare_and_write": false, 00:16:40.626 "abort": true, 00:16:40.626 "seek_hole": false, 00:16:40.626 "seek_data": false, 00:16:40.626 "copy": true, 00:16:40.626 "nvme_iov_md": false 00:16:40.626 }, 00:16:40.626 "memory_domains": [ 00:16:40.626 { 00:16:40.626 "dma_device_id": "system", 00:16:40.626 "dma_device_type": 1 00:16:40.626 }, 00:16:40.626 { 00:16:40.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.626 "dma_device_type": 2 00:16:40.626 } 00:16:40.626 ], 00:16:40.626 "driver_specific": {} 00:16:40.626 } 00:16:40.626 ] 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.626 [2024-12-06 06:42:59.108138] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:40.626 [2024-12-06 06:42:59.108338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:40.626 [2024-12-06 06:42:59.108386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.626 [2024-12-06 06:42:59.110895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:40.626 [2024-12-06 06:42:59.110964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.626 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:40.627 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.627 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.627 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.627 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
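The `bdev_raid.sh@286`/`@287` iterations above recreate the malloc base bdevs (32 MiB, 512-byte blocks) and then assemble them with `bdev_raid_create`. A hedged sketch of that sequence with `rpc_cmd` stubbed so it runs stand-alone (argument order is taken from the trace; the stub is hypothetical):

```shell
# Sketch of the recreation-and-assembly step traced above. rpc_cmd is
# stubbed; the real test sends these RPCs to a running SPDK target.
rpc_cmd() { echo "rpc: $*"; }

num_base_bdevs=4
# Recreate BaseBdev2..BaseBdev4 (BaseBdev1 is created later in the trace).
for ((i = 1; i < num_base_bdevs; i++)); do
    rpc_cmd bdev_malloc_create 32 512 -b "BaseBdev$((i + 1))"
done

# -s: with superblock, -r: raid level, as in the trace above.
base_bdevs='BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'
rpc_cmd bdev_raid_create -s -r raid1 -b "$base_bdevs" -n Existed_Raid
```

Because `BaseBdev1` does not exist yet at this point, the array lands in `configuring` state rather than `online`, which is exactly what the `verify_raid_bdev_state Existed_Raid configuring raid1 0 4` check below confirms.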
00:16:40.627 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.627 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.627 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.627 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.627 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.627 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.627 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.627 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.627 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.627 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.627 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.627 "name": "Existed_Raid", 00:16:40.627 "uuid": "51a961cb-2514-46f9-8ff1-79127ee4d831", 00:16:40.627 "strip_size_kb": 0, 00:16:40.627 "state": "configuring", 00:16:40.627 "raid_level": "raid1", 00:16:40.627 "superblock": true, 00:16:40.627 "num_base_bdevs": 4, 00:16:40.627 "num_base_bdevs_discovered": 3, 00:16:40.627 "num_base_bdevs_operational": 4, 00:16:40.627 "base_bdevs_list": [ 00:16:40.627 { 00:16:40.627 "name": "BaseBdev1", 00:16:40.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.627 "is_configured": false, 00:16:40.627 "data_offset": 0, 00:16:40.627 "data_size": 0 00:16:40.627 }, 00:16:40.627 { 00:16:40.627 "name": "BaseBdev2", 00:16:40.627 "uuid": "af9e3575-a78e-47e2-be9d-15c43c03a3b8", 
00:16:40.627 "is_configured": true, 00:16:40.627 "data_offset": 2048, 00:16:40.627 "data_size": 63488 00:16:40.627 }, 00:16:40.627 { 00:16:40.627 "name": "BaseBdev3", 00:16:40.627 "uuid": "3301c685-7df9-42ef-9097-6896a4bce989", 00:16:40.627 "is_configured": true, 00:16:40.627 "data_offset": 2048, 00:16:40.627 "data_size": 63488 00:16:40.627 }, 00:16:40.627 { 00:16:40.627 "name": "BaseBdev4", 00:16:40.627 "uuid": "8fced57e-7c88-4e25-9ee0-810736f72ca0", 00:16:40.627 "is_configured": true, 00:16:40.627 "data_offset": 2048, 00:16:40.627 "data_size": 63488 00:16:40.627 } 00:16:40.627 ] 00:16:40.627 }' 00:16:40.627 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.627 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.195 [2024-12-06 06:42:59.640292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.195 "name": "Existed_Raid", 00:16:41.195 "uuid": "51a961cb-2514-46f9-8ff1-79127ee4d831", 00:16:41.195 "strip_size_kb": 0, 00:16:41.195 "state": "configuring", 00:16:41.195 "raid_level": "raid1", 00:16:41.195 "superblock": true, 00:16:41.195 "num_base_bdevs": 4, 00:16:41.195 "num_base_bdevs_discovered": 2, 00:16:41.195 "num_base_bdevs_operational": 4, 00:16:41.195 "base_bdevs_list": [ 00:16:41.195 { 00:16:41.195 "name": "BaseBdev1", 00:16:41.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.195 "is_configured": false, 00:16:41.195 "data_offset": 0, 00:16:41.195 "data_size": 0 00:16:41.195 }, 00:16:41.195 { 00:16:41.195 "name": null, 00:16:41.195 "uuid": "af9e3575-a78e-47e2-be9d-15c43c03a3b8", 00:16:41.195 
"is_configured": false, 00:16:41.195 "data_offset": 0, 00:16:41.195 "data_size": 63488 00:16:41.195 }, 00:16:41.195 { 00:16:41.195 "name": "BaseBdev3", 00:16:41.195 "uuid": "3301c685-7df9-42ef-9097-6896a4bce989", 00:16:41.195 "is_configured": true, 00:16:41.195 "data_offset": 2048, 00:16:41.195 "data_size": 63488 00:16:41.195 }, 00:16:41.195 { 00:16:41.195 "name": "BaseBdev4", 00:16:41.195 "uuid": "8fced57e-7c88-4e25-9ee0-810736f72ca0", 00:16:41.195 "is_configured": true, 00:16:41.195 "data_offset": 2048, 00:16:41.195 "data_size": 63488 00:16:41.195 } 00:16:41.195 ] 00:16:41.195 }' 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.195 06:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.764 [2024-12-06 06:43:00.214471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.764 BaseBdev1 
00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.764 [ 00:16:41.764 { 00:16:41.764 "name": "BaseBdev1", 00:16:41.764 "aliases": [ 00:16:41.764 "2336ef8a-77a6-4f71-91cb-5b6b7bdc8d4b" 00:16:41.764 ], 00:16:41.764 "product_name": "Malloc disk", 00:16:41.764 "block_size": 512, 00:16:41.764 "num_blocks": 65536, 00:16:41.764 "uuid": "2336ef8a-77a6-4f71-91cb-5b6b7bdc8d4b", 00:16:41.764 "assigned_rate_limits": { 00:16:41.764 
"rw_ios_per_sec": 0, 00:16:41.764 "rw_mbytes_per_sec": 0, 00:16:41.764 "r_mbytes_per_sec": 0, 00:16:41.764 "w_mbytes_per_sec": 0 00:16:41.764 }, 00:16:41.764 "claimed": true, 00:16:41.764 "claim_type": "exclusive_write", 00:16:41.764 "zoned": false, 00:16:41.764 "supported_io_types": { 00:16:41.764 "read": true, 00:16:41.764 "write": true, 00:16:41.764 "unmap": true, 00:16:41.764 "flush": true, 00:16:41.764 "reset": true, 00:16:41.764 "nvme_admin": false, 00:16:41.764 "nvme_io": false, 00:16:41.764 "nvme_io_md": false, 00:16:41.764 "write_zeroes": true, 00:16:41.764 "zcopy": true, 00:16:41.764 "get_zone_info": false, 00:16:41.764 "zone_management": false, 00:16:41.764 "zone_append": false, 00:16:41.764 "compare": false, 00:16:41.764 "compare_and_write": false, 00:16:41.764 "abort": true, 00:16:41.764 "seek_hole": false, 00:16:41.764 "seek_data": false, 00:16:41.764 "copy": true, 00:16:41.764 "nvme_iov_md": false 00:16:41.764 }, 00:16:41.764 "memory_domains": [ 00:16:41.764 { 00:16:41.764 "dma_device_id": "system", 00:16:41.764 "dma_device_type": 1 00:16:41.764 }, 00:16:41.764 { 00:16:41.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.764 "dma_device_type": 2 00:16:41.764 } 00:16:41.764 ], 00:16:41.764 "driver_specific": {} 00:16:41.764 } 00:16:41.764 ] 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.764 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.764 "name": "Existed_Raid", 00:16:41.764 "uuid": "51a961cb-2514-46f9-8ff1-79127ee4d831", 00:16:41.764 "strip_size_kb": 0, 00:16:41.764 "state": "configuring", 00:16:41.764 "raid_level": "raid1", 00:16:41.764 "superblock": true, 00:16:41.764 "num_base_bdevs": 4, 00:16:41.764 "num_base_bdevs_discovered": 3, 00:16:41.765 "num_base_bdevs_operational": 4, 00:16:41.765 "base_bdevs_list": [ 00:16:41.765 { 00:16:41.765 "name": "BaseBdev1", 00:16:41.765 "uuid": "2336ef8a-77a6-4f71-91cb-5b6b7bdc8d4b", 00:16:41.765 "is_configured": true, 00:16:41.765 "data_offset": 2048, 00:16:41.765 "data_size": 63488 
00:16:41.765 }, 00:16:41.765 { 00:16:41.765 "name": null, 00:16:41.765 "uuid": "af9e3575-a78e-47e2-be9d-15c43c03a3b8", 00:16:41.765 "is_configured": false, 00:16:41.765 "data_offset": 0, 00:16:41.765 "data_size": 63488 00:16:41.765 }, 00:16:41.765 { 00:16:41.765 "name": "BaseBdev3", 00:16:41.765 "uuid": "3301c685-7df9-42ef-9097-6896a4bce989", 00:16:41.765 "is_configured": true, 00:16:41.765 "data_offset": 2048, 00:16:41.765 "data_size": 63488 00:16:41.765 }, 00:16:41.765 { 00:16:41.765 "name": "BaseBdev4", 00:16:41.765 "uuid": "8fced57e-7c88-4e25-9ee0-810736f72ca0", 00:16:41.765 "is_configured": true, 00:16:41.765 "data_offset": 2048, 00:16:41.765 "data_size": 63488 00:16:41.765 } 00:16:41.765 ] 00:16:41.765 }' 00:16:41.765 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.765 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.333 
[2024-12-06 06:43:00.818770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.333 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.333 06:43:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.333 "name": "Existed_Raid", 00:16:42.333 "uuid": "51a961cb-2514-46f9-8ff1-79127ee4d831", 00:16:42.333 "strip_size_kb": 0, 00:16:42.333 "state": "configuring", 00:16:42.333 "raid_level": "raid1", 00:16:42.333 "superblock": true, 00:16:42.334 "num_base_bdevs": 4, 00:16:42.334 "num_base_bdevs_discovered": 2, 00:16:42.334 "num_base_bdevs_operational": 4, 00:16:42.334 "base_bdevs_list": [ 00:16:42.334 { 00:16:42.334 "name": "BaseBdev1", 00:16:42.334 "uuid": "2336ef8a-77a6-4f71-91cb-5b6b7bdc8d4b", 00:16:42.334 "is_configured": true, 00:16:42.334 "data_offset": 2048, 00:16:42.334 "data_size": 63488 00:16:42.334 }, 00:16:42.334 { 00:16:42.334 "name": null, 00:16:42.334 "uuid": "af9e3575-a78e-47e2-be9d-15c43c03a3b8", 00:16:42.334 "is_configured": false, 00:16:42.334 "data_offset": 0, 00:16:42.334 "data_size": 63488 00:16:42.334 }, 00:16:42.334 { 00:16:42.334 "name": null, 00:16:42.334 "uuid": "3301c685-7df9-42ef-9097-6896a4bce989", 00:16:42.334 "is_configured": false, 00:16:42.334 "data_offset": 0, 00:16:42.334 "data_size": 63488 00:16:42.334 }, 00:16:42.334 { 00:16:42.334 "name": "BaseBdev4", 00:16:42.334 "uuid": "8fced57e-7c88-4e25-9ee0-810736f72ca0", 00:16:42.334 "is_configured": true, 00:16:42.334 "data_offset": 2048, 00:16:42.334 "data_size": 63488 00:16:42.334 } 00:16:42.334 ] 00:16:42.334 }' 00:16:42.334 06:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.334 06:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.900 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.901 
06:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.901 [2024-12-06 06:43:01.378881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.901 "name": "Existed_Raid", 00:16:42.901 "uuid": "51a961cb-2514-46f9-8ff1-79127ee4d831", 00:16:42.901 "strip_size_kb": 0, 00:16:42.901 "state": "configuring", 00:16:42.901 "raid_level": "raid1", 00:16:42.901 "superblock": true, 00:16:42.901 "num_base_bdevs": 4, 00:16:42.901 "num_base_bdevs_discovered": 3, 00:16:42.901 "num_base_bdevs_operational": 4, 00:16:42.901 "base_bdevs_list": [ 00:16:42.901 { 00:16:42.901 "name": "BaseBdev1", 00:16:42.901 "uuid": "2336ef8a-77a6-4f71-91cb-5b6b7bdc8d4b", 00:16:42.901 "is_configured": true, 00:16:42.901 "data_offset": 2048, 00:16:42.901 "data_size": 63488 00:16:42.901 }, 00:16:42.901 { 00:16:42.901 "name": null, 00:16:42.901 "uuid": "af9e3575-a78e-47e2-be9d-15c43c03a3b8", 00:16:42.901 "is_configured": false, 00:16:42.901 "data_offset": 0, 00:16:42.901 "data_size": 63488 00:16:42.901 }, 00:16:42.901 { 00:16:42.901 "name": "BaseBdev3", 00:16:42.901 "uuid": "3301c685-7df9-42ef-9097-6896a4bce989", 00:16:42.901 "is_configured": true, 00:16:42.901 "data_offset": 2048, 00:16:42.901 "data_size": 63488 00:16:42.901 }, 00:16:42.901 { 00:16:42.901 "name": "BaseBdev4", 00:16:42.901 "uuid": 
"8fced57e-7c88-4e25-9ee0-810736f72ca0", 00:16:42.901 "is_configured": true, 00:16:42.901 "data_offset": 2048, 00:16:42.901 "data_size": 63488 00:16:42.901 } 00:16:42.901 ] 00:16:42.901 }' 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.901 06:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.469 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:43.469 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.469 06:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.469 06:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.469 06:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.469 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:43.469 06:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:43.469 06:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.469 06:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.469 [2024-12-06 06:43:01.951149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.469 "name": "Existed_Raid", 00:16:43.469 "uuid": "51a961cb-2514-46f9-8ff1-79127ee4d831", 00:16:43.469 "strip_size_kb": 0, 00:16:43.469 "state": "configuring", 00:16:43.469 "raid_level": "raid1", 00:16:43.469 "superblock": true, 00:16:43.469 "num_base_bdevs": 4, 00:16:43.469 "num_base_bdevs_discovered": 2, 00:16:43.469 "num_base_bdevs_operational": 4, 00:16:43.469 "base_bdevs_list": [ 00:16:43.469 { 00:16:43.469 "name": null, 00:16:43.469 
"uuid": "2336ef8a-77a6-4f71-91cb-5b6b7bdc8d4b", 00:16:43.469 "is_configured": false, 00:16:43.469 "data_offset": 0, 00:16:43.469 "data_size": 63488 00:16:43.469 }, 00:16:43.469 { 00:16:43.469 "name": null, 00:16:43.469 "uuid": "af9e3575-a78e-47e2-be9d-15c43c03a3b8", 00:16:43.469 "is_configured": false, 00:16:43.469 "data_offset": 0, 00:16:43.469 "data_size": 63488 00:16:43.469 }, 00:16:43.469 { 00:16:43.469 "name": "BaseBdev3", 00:16:43.469 "uuid": "3301c685-7df9-42ef-9097-6896a4bce989", 00:16:43.469 "is_configured": true, 00:16:43.469 "data_offset": 2048, 00:16:43.469 "data_size": 63488 00:16:43.469 }, 00:16:43.469 { 00:16:43.469 "name": "BaseBdev4", 00:16:43.469 "uuid": "8fced57e-7c88-4e25-9ee0-810736f72ca0", 00:16:43.469 "is_configured": true, 00:16:43.469 "data_offset": 2048, 00:16:43.469 "data_size": 63488 00:16:43.469 } 00:16:43.469 ] 00:16:43.469 }' 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.469 06:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.037 [2024-12-06 06:43:02.611408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.037 06:43:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.037 "name": "Existed_Raid", 00:16:44.037 "uuid": "51a961cb-2514-46f9-8ff1-79127ee4d831", 00:16:44.037 "strip_size_kb": 0, 00:16:44.037 "state": "configuring", 00:16:44.037 "raid_level": "raid1", 00:16:44.037 "superblock": true, 00:16:44.037 "num_base_bdevs": 4, 00:16:44.037 "num_base_bdevs_discovered": 3, 00:16:44.037 "num_base_bdevs_operational": 4, 00:16:44.037 "base_bdevs_list": [ 00:16:44.037 { 00:16:44.037 "name": null, 00:16:44.037 "uuid": "2336ef8a-77a6-4f71-91cb-5b6b7bdc8d4b", 00:16:44.037 "is_configured": false, 00:16:44.037 "data_offset": 0, 00:16:44.037 "data_size": 63488 00:16:44.037 }, 00:16:44.037 { 00:16:44.037 "name": "BaseBdev2", 00:16:44.037 "uuid": "af9e3575-a78e-47e2-be9d-15c43c03a3b8", 00:16:44.037 "is_configured": true, 00:16:44.037 "data_offset": 2048, 00:16:44.037 "data_size": 63488 00:16:44.037 }, 00:16:44.037 { 00:16:44.037 "name": "BaseBdev3", 00:16:44.037 "uuid": "3301c685-7df9-42ef-9097-6896a4bce989", 00:16:44.037 "is_configured": true, 00:16:44.037 "data_offset": 2048, 00:16:44.037 "data_size": 63488 00:16:44.037 }, 00:16:44.037 { 00:16:44.037 "name": "BaseBdev4", 00:16:44.037 "uuid": "8fced57e-7c88-4e25-9ee0-810736f72ca0", 00:16:44.037 "is_configured": true, 00:16:44.037 "data_offset": 2048, 00:16:44.037 "data_size": 63488 00:16:44.037 } 00:16:44.037 ] 00:16:44.037 }' 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.037 06:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.605 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.605 06:43:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:44.605 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.605 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.605 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.605 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:44.605 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.605 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:44.605 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.605 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.605 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.864 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2336ef8a-77a6-4f71-91cb-5b6b7bdc8d4b 00:16:44.864 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.864 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.864 [2024-12-06 06:43:03.310245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:44.864 [2024-12-06 06:43:03.310592] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:44.864 [2024-12-06 06:43:03.310618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:44.864 NewBaseBdev 00:16:44.864 [2024-12-06 06:43:03.310955] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:44.864 [2024-12-06 06:43:03.311161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:44.864 [2024-12-06 06:43:03.311179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:44.864 [2024-12-06 06:43:03.311346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.864 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.864 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:44.864 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:44.864 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:44.864 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:44.864 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:44.864 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:44.864 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:44.864 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.864 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.864 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.864 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:44.864 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.865 
06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.865 [ 00:16:44.865 { 00:16:44.865 "name": "NewBaseBdev", 00:16:44.865 "aliases": [ 00:16:44.865 "2336ef8a-77a6-4f71-91cb-5b6b7bdc8d4b" 00:16:44.865 ], 00:16:44.865 "product_name": "Malloc disk", 00:16:44.865 "block_size": 512, 00:16:44.865 "num_blocks": 65536, 00:16:44.865 "uuid": "2336ef8a-77a6-4f71-91cb-5b6b7bdc8d4b", 00:16:44.865 "assigned_rate_limits": { 00:16:44.865 "rw_ios_per_sec": 0, 00:16:44.865 "rw_mbytes_per_sec": 0, 00:16:44.865 "r_mbytes_per_sec": 0, 00:16:44.865 "w_mbytes_per_sec": 0 00:16:44.865 }, 00:16:44.865 "claimed": true, 00:16:44.865 "claim_type": "exclusive_write", 00:16:44.865 "zoned": false, 00:16:44.865 "supported_io_types": { 00:16:44.865 "read": true, 00:16:44.865 "write": true, 00:16:44.865 "unmap": true, 00:16:44.865 "flush": true, 00:16:44.865 "reset": true, 00:16:44.865 "nvme_admin": false, 00:16:44.865 "nvme_io": false, 00:16:44.865 "nvme_io_md": false, 00:16:44.865 "write_zeroes": true, 00:16:44.865 "zcopy": true, 00:16:44.865 "get_zone_info": false, 00:16:44.865 "zone_management": false, 00:16:44.865 "zone_append": false, 00:16:44.865 "compare": false, 00:16:44.865 "compare_and_write": false, 00:16:44.865 "abort": true, 00:16:44.865 "seek_hole": false, 00:16:44.865 "seek_data": false, 00:16:44.865 "copy": true, 00:16:44.865 "nvme_iov_md": false 00:16:44.865 }, 00:16:44.865 "memory_domains": [ 00:16:44.865 { 00:16:44.865 "dma_device_id": "system", 00:16:44.865 "dma_device_type": 1 00:16:44.865 }, 00:16:44.865 { 00:16:44.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.865 "dma_device_type": 2 00:16:44.865 } 00:16:44.865 ], 00:16:44.865 "driver_specific": {} 00:16:44.865 } 00:16:44.865 ] 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:44.865 06:43:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.865 "name": "Existed_Raid", 00:16:44.865 "uuid": "51a961cb-2514-46f9-8ff1-79127ee4d831", 00:16:44.865 "strip_size_kb": 0, 00:16:44.865 
"state": "online", 00:16:44.865 "raid_level": "raid1", 00:16:44.865 "superblock": true, 00:16:44.865 "num_base_bdevs": 4, 00:16:44.865 "num_base_bdevs_discovered": 4, 00:16:44.865 "num_base_bdevs_operational": 4, 00:16:44.865 "base_bdevs_list": [ 00:16:44.865 { 00:16:44.865 "name": "NewBaseBdev", 00:16:44.865 "uuid": "2336ef8a-77a6-4f71-91cb-5b6b7bdc8d4b", 00:16:44.865 "is_configured": true, 00:16:44.865 "data_offset": 2048, 00:16:44.865 "data_size": 63488 00:16:44.865 }, 00:16:44.865 { 00:16:44.865 "name": "BaseBdev2", 00:16:44.865 "uuid": "af9e3575-a78e-47e2-be9d-15c43c03a3b8", 00:16:44.865 "is_configured": true, 00:16:44.865 "data_offset": 2048, 00:16:44.865 "data_size": 63488 00:16:44.865 }, 00:16:44.865 { 00:16:44.865 "name": "BaseBdev3", 00:16:44.865 "uuid": "3301c685-7df9-42ef-9097-6896a4bce989", 00:16:44.865 "is_configured": true, 00:16:44.865 "data_offset": 2048, 00:16:44.865 "data_size": 63488 00:16:44.865 }, 00:16:44.865 { 00:16:44.865 "name": "BaseBdev4", 00:16:44.865 "uuid": "8fced57e-7c88-4e25-9ee0-810736f72ca0", 00:16:44.865 "is_configured": true, 00:16:44.865 "data_offset": 2048, 00:16:44.865 "data_size": 63488 00:16:44.865 } 00:16:44.865 ] 00:16:44.865 }' 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.865 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.435 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:45.435 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:45.435 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:45.435 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:45.435 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:45.435 
06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:45.435 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:45.435 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:45.435 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.435 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.435 [2024-12-06 06:43:03.874926] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.435 06:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.435 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:45.435 "name": "Existed_Raid", 00:16:45.435 "aliases": [ 00:16:45.435 "51a961cb-2514-46f9-8ff1-79127ee4d831" 00:16:45.435 ], 00:16:45.435 "product_name": "Raid Volume", 00:16:45.435 "block_size": 512, 00:16:45.435 "num_blocks": 63488, 00:16:45.435 "uuid": "51a961cb-2514-46f9-8ff1-79127ee4d831", 00:16:45.435 "assigned_rate_limits": { 00:16:45.435 "rw_ios_per_sec": 0, 00:16:45.435 "rw_mbytes_per_sec": 0, 00:16:45.435 "r_mbytes_per_sec": 0, 00:16:45.435 "w_mbytes_per_sec": 0 00:16:45.435 }, 00:16:45.435 "claimed": false, 00:16:45.435 "zoned": false, 00:16:45.435 "supported_io_types": { 00:16:45.435 "read": true, 00:16:45.435 "write": true, 00:16:45.435 "unmap": false, 00:16:45.435 "flush": false, 00:16:45.435 "reset": true, 00:16:45.435 "nvme_admin": false, 00:16:45.435 "nvme_io": false, 00:16:45.435 "nvme_io_md": false, 00:16:45.435 "write_zeroes": true, 00:16:45.435 "zcopy": false, 00:16:45.435 "get_zone_info": false, 00:16:45.435 "zone_management": false, 00:16:45.435 "zone_append": false, 00:16:45.435 "compare": false, 00:16:45.435 "compare_and_write": false, 00:16:45.435 
"abort": false, 00:16:45.435 "seek_hole": false, 00:16:45.435 "seek_data": false, 00:16:45.435 "copy": false, 00:16:45.435 "nvme_iov_md": false 00:16:45.435 }, 00:16:45.435 "memory_domains": [ 00:16:45.435 { 00:16:45.435 "dma_device_id": "system", 00:16:45.435 "dma_device_type": 1 00:16:45.435 }, 00:16:45.435 { 00:16:45.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.435 "dma_device_type": 2 00:16:45.435 }, 00:16:45.435 { 00:16:45.435 "dma_device_id": "system", 00:16:45.435 "dma_device_type": 1 00:16:45.435 }, 00:16:45.435 { 00:16:45.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.435 "dma_device_type": 2 00:16:45.435 }, 00:16:45.435 { 00:16:45.435 "dma_device_id": "system", 00:16:45.435 "dma_device_type": 1 00:16:45.435 }, 00:16:45.435 { 00:16:45.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.435 "dma_device_type": 2 00:16:45.435 }, 00:16:45.435 { 00:16:45.435 "dma_device_id": "system", 00:16:45.435 "dma_device_type": 1 00:16:45.435 }, 00:16:45.435 { 00:16:45.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.435 "dma_device_type": 2 00:16:45.435 } 00:16:45.435 ], 00:16:45.435 "driver_specific": { 00:16:45.435 "raid": { 00:16:45.435 "uuid": "51a961cb-2514-46f9-8ff1-79127ee4d831", 00:16:45.435 "strip_size_kb": 0, 00:16:45.435 "state": "online", 00:16:45.435 "raid_level": "raid1", 00:16:45.435 "superblock": true, 00:16:45.435 "num_base_bdevs": 4, 00:16:45.435 "num_base_bdevs_discovered": 4, 00:16:45.435 "num_base_bdevs_operational": 4, 00:16:45.435 "base_bdevs_list": [ 00:16:45.435 { 00:16:45.435 "name": "NewBaseBdev", 00:16:45.435 "uuid": "2336ef8a-77a6-4f71-91cb-5b6b7bdc8d4b", 00:16:45.435 "is_configured": true, 00:16:45.435 "data_offset": 2048, 00:16:45.435 "data_size": 63488 00:16:45.435 }, 00:16:45.435 { 00:16:45.435 "name": "BaseBdev2", 00:16:45.435 "uuid": "af9e3575-a78e-47e2-be9d-15c43c03a3b8", 00:16:45.435 "is_configured": true, 00:16:45.435 "data_offset": 2048, 00:16:45.435 "data_size": 63488 00:16:45.435 }, 00:16:45.435 { 
00:16:45.435 "name": "BaseBdev3", 00:16:45.435 "uuid": "3301c685-7df9-42ef-9097-6896a4bce989", 00:16:45.435 "is_configured": true, 00:16:45.435 "data_offset": 2048, 00:16:45.435 "data_size": 63488 00:16:45.435 }, 00:16:45.435 { 00:16:45.435 "name": "BaseBdev4", 00:16:45.435 "uuid": "8fced57e-7c88-4e25-9ee0-810736f72ca0", 00:16:45.435 "is_configured": true, 00:16:45.435 "data_offset": 2048, 00:16:45.435 "data_size": 63488 00:16:45.435 } 00:16:45.435 ] 00:16:45.435 } 00:16:45.435 } 00:16:45.435 }' 00:16:45.435 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:45.435 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:45.435 BaseBdev2 00:16:45.435 BaseBdev3 00:16:45.435 BaseBdev4' 00:16:45.435 06:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.435 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:45.435 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.435 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.435 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:45.435 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.435 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.435 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.435 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:16:45.435 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.435 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.694 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.694 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:45.694 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.694 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.694 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.694 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.694 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.694 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.694 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:45.694 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.694 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.694 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.694 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.694 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.695 [2024-12-06 06:43:04.238553] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:45.695 [2024-12-06 06:43:04.238713] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.695 [2024-12-06 06:43:04.238866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.695 [2024-12-06 06:43:04.239251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:45.695 [2024-12-06 06:43:04.239274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74182 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74182 ']' 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74182 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74182 00:16:45.695 killing process with pid 74182 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74182' 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74182 00:16:45.695 [2024-12-06 06:43:04.272664] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:45.695 06:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74182 00:16:46.264 [2024-12-06 06:43:04.648304] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:47.201 ************************************ 00:16:47.201 END TEST raid_state_function_test_sb 00:16:47.201 ************************************ 00:16:47.201 06:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:47.201 00:16:47.201 real 0m12.816s 
00:16:47.201 user 0m21.245s 00:16:47.201 sys 0m1.711s 00:16:47.201 06:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.201 06:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.201 06:43:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:16:47.201 06:43:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:47.201 06:43:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.201 06:43:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.201 ************************************ 00:16:47.201 START TEST raid_superblock_test 00:16:47.201 ************************************ 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:47.201 06:43:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74868 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74868 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74868 ']' 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.201 06:43:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.460 [2024-12-06 06:43:05.877786] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:16:47.460 [2024-12-06 06:43:05.877980] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74868 ] 00:16:47.460 [2024-12-06 06:43:06.069270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.720 [2024-12-06 06:43:06.223409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.979 [2024-12-06 06:43:06.442777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.979 [2024-12-06 06:43:06.443121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.238 06:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.238 06:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:48.238 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:48.238 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.238 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:48.238 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:48.238 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:48.238 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.238 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.238 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.238 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:48.238 
06:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.238 06:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.497 malloc1 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.497 [2024-12-06 06:43:06.900092] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:48.497 [2024-12-06 06:43:06.900294] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.497 [2024-12-06 06:43:06.900371] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:48.497 [2024-12-06 06:43:06.900626] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.497 [2024-12-06 06:43:06.903438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.497 [2024-12-06 06:43:06.903610] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:48.497 pt1 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.497 malloc2 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.497 [2024-12-06 06:43:06.957085] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:48.497 [2024-12-06 06:43:06.957157] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.497 [2024-12-06 06:43:06.957195] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:48.497 [2024-12-06 06:43:06.957210] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.497 [2024-12-06 06:43:06.959926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.497 [2024-12-06 06:43:06.959970] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:48.497 
pt2 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.497 06:43:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.497 malloc3 00:16:48.497 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.497 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.498 [2024-12-06 06:43:07.023479] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:48.498 [2024-12-06 06:43:07.023577] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.498 [2024-12-06 06:43:07.023614] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:48.498 [2024-12-06 06:43:07.023631] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.498 [2024-12-06 06:43:07.026596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.498 [2024-12-06 06:43:07.026689] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:48.498 pt3 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.498 malloc4 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.498 [2024-12-06 06:43:07.079939] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:48.498 [2024-12-06 06:43:07.080177] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.498 [2024-12-06 06:43:07.080253] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:48.498 [2024-12-06 06:43:07.080361] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.498 [2024-12-06 06:43:07.083372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.498 [2024-12-06 06:43:07.083575] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:48.498 pt4 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.498 [2024-12-06 06:43:07.092020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:48.498 [2024-12-06 06:43:07.094672] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.498 [2024-12-06 06:43:07.094783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:48.498 [2024-12-06 06:43:07.094883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:48.498 [2024-12-06 06:43:07.095184] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:48.498 [2024-12-06 06:43:07.095205] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:48.498 [2024-12-06 06:43:07.095524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:48.498 [2024-12-06 06:43:07.095827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:48.498 [2024-12-06 06:43:07.095859] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:48.498 [2024-12-06 06:43:07.096117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.498 
06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.498 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.757 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.757 "name": "raid_bdev1", 00:16:48.757 "uuid": "144db9af-b8b9-4056-89ba-af25474ed866", 00:16:48.757 "strip_size_kb": 0, 00:16:48.757 "state": "online", 00:16:48.757 "raid_level": "raid1", 00:16:48.757 "superblock": true, 00:16:48.757 "num_base_bdevs": 4, 00:16:48.757 "num_base_bdevs_discovered": 4, 00:16:48.757 "num_base_bdevs_operational": 4, 00:16:48.757 "base_bdevs_list": [ 00:16:48.757 { 00:16:48.757 "name": "pt1", 00:16:48.757 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:48.757 "is_configured": true, 00:16:48.757 "data_offset": 2048, 00:16:48.757 "data_size": 63488 00:16:48.757 }, 00:16:48.757 { 00:16:48.757 "name": "pt2", 00:16:48.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.757 "is_configured": true, 00:16:48.757 "data_offset": 2048, 00:16:48.757 "data_size": 63488 00:16:48.757 }, 00:16:48.757 { 00:16:48.757 "name": "pt3", 00:16:48.757 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:48.757 "is_configured": true, 00:16:48.757 "data_offset": 2048, 00:16:48.757 "data_size": 63488 
00:16:48.757 }, 00:16:48.757 { 00:16:48.757 "name": "pt4", 00:16:48.757 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:48.757 "is_configured": true, 00:16:48.757 "data_offset": 2048, 00:16:48.757 "data_size": 63488 00:16:48.757 } 00:16:48.757 ] 00:16:48.757 }' 00:16:48.757 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.757 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.016 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:49.016 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:49.016 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:49.016 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:49.016 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:49.016 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:49.016 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:49.016 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:49.016 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.016 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.016 [2024-12-06 06:43:07.612656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.016 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.016 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:49.016 "name": "raid_bdev1", 00:16:49.016 "aliases": [ 00:16:49.016 "144db9af-b8b9-4056-89ba-af25474ed866" 00:16:49.016 ], 
00:16:49.016 "product_name": "Raid Volume", 00:16:49.016 "block_size": 512, 00:16:49.016 "num_blocks": 63488, 00:16:49.016 "uuid": "144db9af-b8b9-4056-89ba-af25474ed866", 00:16:49.016 "assigned_rate_limits": { 00:16:49.016 "rw_ios_per_sec": 0, 00:16:49.016 "rw_mbytes_per_sec": 0, 00:16:49.016 "r_mbytes_per_sec": 0, 00:16:49.016 "w_mbytes_per_sec": 0 00:16:49.016 }, 00:16:49.016 "claimed": false, 00:16:49.016 "zoned": false, 00:16:49.016 "supported_io_types": { 00:16:49.016 "read": true, 00:16:49.016 "write": true, 00:16:49.016 "unmap": false, 00:16:49.016 "flush": false, 00:16:49.016 "reset": true, 00:16:49.016 "nvme_admin": false, 00:16:49.016 "nvme_io": false, 00:16:49.016 "nvme_io_md": false, 00:16:49.016 "write_zeroes": true, 00:16:49.016 "zcopy": false, 00:16:49.016 "get_zone_info": false, 00:16:49.016 "zone_management": false, 00:16:49.016 "zone_append": false, 00:16:49.016 "compare": false, 00:16:49.016 "compare_and_write": false, 00:16:49.016 "abort": false, 00:16:49.016 "seek_hole": false, 00:16:49.016 "seek_data": false, 00:16:49.016 "copy": false, 00:16:49.016 "nvme_iov_md": false 00:16:49.016 }, 00:16:49.016 "memory_domains": [ 00:16:49.016 { 00:16:49.016 "dma_device_id": "system", 00:16:49.016 "dma_device_type": 1 00:16:49.016 }, 00:16:49.016 { 00:16:49.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.016 "dma_device_type": 2 00:16:49.016 }, 00:16:49.016 { 00:16:49.016 "dma_device_id": "system", 00:16:49.016 "dma_device_type": 1 00:16:49.016 }, 00:16:49.016 { 00:16:49.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.016 "dma_device_type": 2 00:16:49.016 }, 00:16:49.016 { 00:16:49.016 "dma_device_id": "system", 00:16:49.016 "dma_device_type": 1 00:16:49.016 }, 00:16:49.016 { 00:16:49.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.016 "dma_device_type": 2 00:16:49.016 }, 00:16:49.016 { 00:16:49.016 "dma_device_id": "system", 00:16:49.016 "dma_device_type": 1 00:16:49.016 }, 00:16:49.016 { 00:16:49.016 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:49.016 "dma_device_type": 2 00:16:49.016 } 00:16:49.016 ], 00:16:49.016 "driver_specific": { 00:16:49.016 "raid": { 00:16:49.016 "uuid": "144db9af-b8b9-4056-89ba-af25474ed866", 00:16:49.016 "strip_size_kb": 0, 00:16:49.016 "state": "online", 00:16:49.016 "raid_level": "raid1", 00:16:49.016 "superblock": true, 00:16:49.016 "num_base_bdevs": 4, 00:16:49.016 "num_base_bdevs_discovered": 4, 00:16:49.016 "num_base_bdevs_operational": 4, 00:16:49.016 "base_bdevs_list": [ 00:16:49.016 { 00:16:49.016 "name": "pt1", 00:16:49.016 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.016 "is_configured": true, 00:16:49.016 "data_offset": 2048, 00:16:49.016 "data_size": 63488 00:16:49.016 }, 00:16:49.016 { 00:16:49.016 "name": "pt2", 00:16:49.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.016 "is_configured": true, 00:16:49.016 "data_offset": 2048, 00:16:49.016 "data_size": 63488 00:16:49.016 }, 00:16:49.016 { 00:16:49.016 "name": "pt3", 00:16:49.016 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:49.016 "is_configured": true, 00:16:49.016 "data_offset": 2048, 00:16:49.016 "data_size": 63488 00:16:49.016 }, 00:16:49.016 { 00:16:49.016 "name": "pt4", 00:16:49.016 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:49.016 "is_configured": true, 00:16:49.016 "data_offset": 2048, 00:16:49.016 "data_size": 63488 00:16:49.016 } 00:16:49.016 ] 00:16:49.016 } 00:16:49.016 } 00:16:49.016 }' 00:16:49.016 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:49.275 pt2 00:16:49.275 pt3 00:16:49.275 pt4' 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.275 06:43:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.275 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.533 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.533 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.533 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.533 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:49.533 06:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:49.533 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:49.534 06:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.534 [2024-12-06 06:43:07.976757] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=144db9af-b8b9-4056-89ba-af25474ed866 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 144db9af-b8b9-4056-89ba-af25474ed866 ']' 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.534 [2024-12-06 06:43:08.024339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:49.534 [2024-12-06 06:43:08.024370] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.534 [2024-12-06 06:43:08.024476] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.534 [2024-12-06 06:43:08.024623] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.534 [2024-12-06 06:43:08.024649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.534 06:43:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.534 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.534 [2024-12-06 06:43:08.176423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:49.792 [2024-12-06 06:43:08.178931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:49.793 [2024-12-06 06:43:08.179002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:49.793 [2024-12-06 06:43:08.179059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:49.793 [2024-12-06 06:43:08.179131] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:49.793 [2024-12-06 06:43:08.179208] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:49.793 [2024-12-06 06:43:08.179241] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:49.793 [2024-12-06 06:43:08.179271] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:49.793 [2024-12-06 06:43:08.179292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:49.793 [2024-12-06 06:43:08.179317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:16:49.793 request: 00:16:49.793 { 00:16:49.793 "name": "raid_bdev1", 00:16:49.793 "raid_level": "raid1", 00:16:49.793 "base_bdevs": [ 00:16:49.793 "malloc1", 00:16:49.793 "malloc2", 00:16:49.793 "malloc3", 00:16:49.793 "malloc4" 00:16:49.793 ], 00:16:49.793 "superblock": false, 00:16:49.793 "method": "bdev_raid_create", 00:16:49.793 "req_id": 1 00:16:49.793 } 00:16:49.793 Got JSON-RPC error response 00:16:49.793 response: 00:16:49.793 { 00:16:49.793 "code": -17, 00:16:49.793 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:49.793 } 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:49.793 
06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.793 [2024-12-06 06:43:08.244414] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:49.793 [2024-12-06 06:43:08.244681] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.793 [2024-12-06 06:43:08.244755] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:49.793 [2024-12-06 06:43:08.244937] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.793 [2024-12-06 06:43:08.248308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.793 [2024-12-06 06:43:08.248509] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:49.793 [2024-12-06 06:43:08.248785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:49.793 [2024-12-06 06:43:08.249028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:49.793 pt1 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.793 06:43:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.793 "name": "raid_bdev1", 00:16:49.793 "uuid": "144db9af-b8b9-4056-89ba-af25474ed866", 00:16:49.793 "strip_size_kb": 0, 00:16:49.793 "state": "configuring", 00:16:49.793 "raid_level": "raid1", 00:16:49.793 "superblock": true, 00:16:49.793 "num_base_bdevs": 4, 00:16:49.793 "num_base_bdevs_discovered": 1, 00:16:49.793 "num_base_bdevs_operational": 4, 00:16:49.793 "base_bdevs_list": [ 00:16:49.793 { 00:16:49.793 "name": "pt1", 00:16:49.793 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.793 "is_configured": true, 00:16:49.793 "data_offset": 2048, 00:16:49.793 "data_size": 63488 00:16:49.793 }, 00:16:49.793 { 00:16:49.793 "name": null, 00:16:49.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.793 "is_configured": false, 00:16:49.793 "data_offset": 2048, 00:16:49.793 "data_size": 63488 00:16:49.793 }, 00:16:49.793 { 00:16:49.793 "name": null, 00:16:49.793 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:49.793 
"is_configured": false, 00:16:49.793 "data_offset": 2048, 00:16:49.793 "data_size": 63488 00:16:49.793 }, 00:16:49.793 { 00:16:49.793 "name": null, 00:16:49.793 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:49.793 "is_configured": false, 00:16:49.793 "data_offset": 2048, 00:16:49.793 "data_size": 63488 00:16:49.793 } 00:16:49.793 ] 00:16:49.793 }' 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.793 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.360 [2024-12-06 06:43:08.793073] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:50.360 [2024-12-06 06:43:08.793178] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.360 [2024-12-06 06:43:08.793211] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:50.360 [2024-12-06 06:43:08.793229] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.360 [2024-12-06 06:43:08.793821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.360 [2024-12-06 06:43:08.793875] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:50.360 [2024-12-06 06:43:08.793980] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:50.360 [2024-12-06 06:43:08.794017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:16:50.360 pt2 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.360 [2024-12-06 06:43:08.801052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.360 "name": "raid_bdev1", 00:16:50.360 "uuid": "144db9af-b8b9-4056-89ba-af25474ed866", 00:16:50.360 "strip_size_kb": 0, 00:16:50.360 "state": "configuring", 00:16:50.360 "raid_level": "raid1", 00:16:50.360 "superblock": true, 00:16:50.360 "num_base_bdevs": 4, 00:16:50.360 "num_base_bdevs_discovered": 1, 00:16:50.360 "num_base_bdevs_operational": 4, 00:16:50.360 "base_bdevs_list": [ 00:16:50.360 { 00:16:50.360 "name": "pt1", 00:16:50.360 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.360 "is_configured": true, 00:16:50.360 "data_offset": 2048, 00:16:50.360 "data_size": 63488 00:16:50.360 }, 00:16:50.360 { 00:16:50.360 "name": null, 00:16:50.360 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.360 "is_configured": false, 00:16:50.360 "data_offset": 0, 00:16:50.360 "data_size": 63488 00:16:50.360 }, 00:16:50.360 { 00:16:50.360 "name": null, 00:16:50.360 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.360 "is_configured": false, 00:16:50.360 "data_offset": 2048, 00:16:50.360 "data_size": 63488 00:16:50.360 }, 00:16:50.360 { 00:16:50.360 "name": null, 00:16:50.360 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:50.360 "is_configured": false, 00:16:50.360 "data_offset": 2048, 00:16:50.360 "data_size": 63488 00:16:50.360 } 00:16:50.360 ] 00:16:50.360 }' 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.360 06:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.928 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:16:50.928 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.929 [2024-12-06 06:43:09.345260] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:50.929 [2024-12-06 06:43:09.345367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.929 [2024-12-06 06:43:09.345413] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:50.929 [2024-12-06 06:43:09.345443] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.929 [2024-12-06 06:43:09.346127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.929 [2024-12-06 06:43:09.346162] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:50.929 [2024-12-06 06:43:09.346271] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:50.929 [2024-12-06 06:43:09.346304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:50.929 pt2 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:50.929 06:43:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.929 [2024-12-06 06:43:09.353157] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:50.929 [2024-12-06 06:43:09.353219] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.929 [2024-12-06 06:43:09.353247] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:50.929 [2024-12-06 06:43:09.353261] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.929 [2024-12-06 06:43:09.353785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.929 [2024-12-06 06:43:09.353822] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:50.929 [2024-12-06 06:43:09.353923] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:50.929 [2024-12-06 06:43:09.353952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:50.929 pt3 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.929 [2024-12-06 06:43:09.361131] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:50.929 [2024-12-06 
06:43:09.361187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.929 [2024-12-06 06:43:09.361216] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:50.929 [2024-12-06 06:43:09.361230] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.929 [2024-12-06 06:43:09.361731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.929 [2024-12-06 06:43:09.361779] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:50.929 [2024-12-06 06:43:09.361869] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:50.929 [2024-12-06 06:43:09.361906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:50.929 [2024-12-06 06:43:09.362087] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:50.929 [2024-12-06 06:43:09.362109] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:50.929 [2024-12-06 06:43:09.362424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:50.929 [2024-12-06 06:43:09.362657] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:50.929 [2024-12-06 06:43:09.362679] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:50.929 [2024-12-06 06:43:09.362842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.929 pt4 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.929 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.929 "name": "raid_bdev1", 00:16:50.929 "uuid": "144db9af-b8b9-4056-89ba-af25474ed866", 00:16:50.929 "strip_size_kb": 0, 00:16:50.929 "state": "online", 00:16:50.929 "raid_level": "raid1", 00:16:50.929 "superblock": true, 00:16:50.929 "num_base_bdevs": 4, 00:16:50.929 
"num_base_bdevs_discovered": 4, 00:16:50.929 "num_base_bdevs_operational": 4, 00:16:50.929 "base_bdevs_list": [ 00:16:50.929 { 00:16:50.929 "name": "pt1", 00:16:50.929 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.929 "is_configured": true, 00:16:50.929 "data_offset": 2048, 00:16:50.929 "data_size": 63488 00:16:50.930 }, 00:16:50.930 { 00:16:50.930 "name": "pt2", 00:16:50.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.930 "is_configured": true, 00:16:50.930 "data_offset": 2048, 00:16:50.930 "data_size": 63488 00:16:50.930 }, 00:16:50.930 { 00:16:50.930 "name": "pt3", 00:16:50.930 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.930 "is_configured": true, 00:16:50.930 "data_offset": 2048, 00:16:50.930 "data_size": 63488 00:16:50.930 }, 00:16:50.930 { 00:16:50.930 "name": "pt4", 00:16:50.930 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:50.930 "is_configured": true, 00:16:50.930 "data_offset": 2048, 00:16:50.930 "data_size": 63488 00:16:50.930 } 00:16:50.930 ] 00:16:50.930 }' 00:16:50.930 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.930 06:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.498 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:51.498 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:51.498 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:51.498 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:51.498 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:51.498 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:51.498 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:16:51.498 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:51.498 06:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.498 06:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.498 [2024-12-06 06:43:09.878346] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.498 06:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.498 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:51.498 "name": "raid_bdev1", 00:16:51.498 "aliases": [ 00:16:51.498 "144db9af-b8b9-4056-89ba-af25474ed866" 00:16:51.498 ], 00:16:51.498 "product_name": "Raid Volume", 00:16:51.498 "block_size": 512, 00:16:51.498 "num_blocks": 63488, 00:16:51.498 "uuid": "144db9af-b8b9-4056-89ba-af25474ed866", 00:16:51.498 "assigned_rate_limits": { 00:16:51.498 "rw_ios_per_sec": 0, 00:16:51.498 "rw_mbytes_per_sec": 0, 00:16:51.498 "r_mbytes_per_sec": 0, 00:16:51.498 "w_mbytes_per_sec": 0 00:16:51.498 }, 00:16:51.498 "claimed": false, 00:16:51.498 "zoned": false, 00:16:51.498 "supported_io_types": { 00:16:51.498 "read": true, 00:16:51.498 "write": true, 00:16:51.498 "unmap": false, 00:16:51.498 "flush": false, 00:16:51.498 "reset": true, 00:16:51.498 "nvme_admin": false, 00:16:51.498 "nvme_io": false, 00:16:51.498 "nvme_io_md": false, 00:16:51.498 "write_zeroes": true, 00:16:51.498 "zcopy": false, 00:16:51.498 "get_zone_info": false, 00:16:51.498 "zone_management": false, 00:16:51.498 "zone_append": false, 00:16:51.498 "compare": false, 00:16:51.498 "compare_and_write": false, 00:16:51.498 "abort": false, 00:16:51.498 "seek_hole": false, 00:16:51.498 "seek_data": false, 00:16:51.498 "copy": false, 00:16:51.498 "nvme_iov_md": false 00:16:51.498 }, 00:16:51.498 "memory_domains": [ 00:16:51.498 { 00:16:51.498 "dma_device_id": "system", 00:16:51.498 
"dma_device_type": 1 00:16:51.498 }, 00:16:51.498 { 00:16:51.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.498 "dma_device_type": 2 00:16:51.498 }, 00:16:51.498 { 00:16:51.498 "dma_device_id": "system", 00:16:51.498 "dma_device_type": 1 00:16:51.498 }, 00:16:51.498 { 00:16:51.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.498 "dma_device_type": 2 00:16:51.498 }, 00:16:51.498 { 00:16:51.498 "dma_device_id": "system", 00:16:51.498 "dma_device_type": 1 00:16:51.498 }, 00:16:51.498 { 00:16:51.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.498 "dma_device_type": 2 00:16:51.498 }, 00:16:51.498 { 00:16:51.498 "dma_device_id": "system", 00:16:51.498 "dma_device_type": 1 00:16:51.498 }, 00:16:51.498 { 00:16:51.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.499 "dma_device_type": 2 00:16:51.499 } 00:16:51.499 ], 00:16:51.499 "driver_specific": { 00:16:51.499 "raid": { 00:16:51.499 "uuid": "144db9af-b8b9-4056-89ba-af25474ed866", 00:16:51.499 "strip_size_kb": 0, 00:16:51.499 "state": "online", 00:16:51.499 "raid_level": "raid1", 00:16:51.499 "superblock": true, 00:16:51.499 "num_base_bdevs": 4, 00:16:51.499 "num_base_bdevs_discovered": 4, 00:16:51.499 "num_base_bdevs_operational": 4, 00:16:51.499 "base_bdevs_list": [ 00:16:51.499 { 00:16:51.499 "name": "pt1", 00:16:51.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:51.499 "is_configured": true, 00:16:51.499 "data_offset": 2048, 00:16:51.499 "data_size": 63488 00:16:51.499 }, 00:16:51.499 { 00:16:51.499 "name": "pt2", 00:16:51.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.499 "is_configured": true, 00:16:51.499 "data_offset": 2048, 00:16:51.499 "data_size": 63488 00:16:51.499 }, 00:16:51.499 { 00:16:51.499 "name": "pt3", 00:16:51.499 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:51.499 "is_configured": true, 00:16:51.499 "data_offset": 2048, 00:16:51.499 "data_size": 63488 00:16:51.499 }, 00:16:51.499 { 00:16:51.499 "name": "pt4", 00:16:51.499 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:16:51.499 "is_configured": true, 00:16:51.499 "data_offset": 2048, 00:16:51.499 "data_size": 63488 00:16:51.499 } 00:16:51.499 ] 00:16:51.499 } 00:16:51.499 } 00:16:51.499 }' 00:16:51.499 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:51.499 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:51.499 pt2 00:16:51.499 pt3 00:16:51.499 pt4' 00:16:51.499 06:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.499 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:51.499 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.499 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:51.499 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.499 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.499 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.499 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.499 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.499 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.499 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.499 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:51.499 06:43:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.499 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.499 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.499 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:51.758 [2024-12-06 06:43:10.238637] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 144db9af-b8b9-4056-89ba-af25474ed866 '!=' 144db9af-b8b9-4056-89ba-af25474ed866 ']' 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.758 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.758 [2024-12-06 06:43:10.290202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:51.759 06:43:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.759 "name": "raid_bdev1", 00:16:51.759 "uuid": "144db9af-b8b9-4056-89ba-af25474ed866", 00:16:51.759 "strip_size_kb": 0, 00:16:51.759 "state": "online", 
00:16:51.759 "raid_level": "raid1", 00:16:51.759 "superblock": true, 00:16:51.759 "num_base_bdevs": 4, 00:16:51.759 "num_base_bdevs_discovered": 3, 00:16:51.759 "num_base_bdevs_operational": 3, 00:16:51.759 "base_bdevs_list": [ 00:16:51.759 { 00:16:51.759 "name": null, 00:16:51.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.759 "is_configured": false, 00:16:51.759 "data_offset": 0, 00:16:51.759 "data_size": 63488 00:16:51.759 }, 00:16:51.759 { 00:16:51.759 "name": "pt2", 00:16:51.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.759 "is_configured": true, 00:16:51.759 "data_offset": 2048, 00:16:51.759 "data_size": 63488 00:16:51.759 }, 00:16:51.759 { 00:16:51.759 "name": "pt3", 00:16:51.759 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:51.759 "is_configured": true, 00:16:51.759 "data_offset": 2048, 00:16:51.759 "data_size": 63488 00:16:51.759 }, 00:16:51.759 { 00:16:51.759 "name": "pt4", 00:16:51.759 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:51.759 "is_configured": true, 00:16:51.759 "data_offset": 2048, 00:16:51.759 "data_size": 63488 00:16:51.759 } 00:16:51.759 ] 00:16:51.759 }' 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.759 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.327 [2024-12-06 06:43:10.778185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:52.327 [2024-12-06 06:43:10.778228] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.327 [2024-12-06 06:43:10.778329] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:16:52.327 [2024-12-06 06:43:10.778431] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.327 [2024-12-06 06:43:10.778447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:52.327 
06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.327 [2024-12-06 06:43:10.862180] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:52.327 [2024-12-06 06:43:10.862246] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.327 [2024-12-06 06:43:10.862275] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:52.327 [2024-12-06 06:43:10.862290] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.327 [2024-12-06 06:43:10.865154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.327 [2024-12-06 06:43:10.865198] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:52.327 [2024-12-06 06:43:10.865302] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:52.327 [2024-12-06 06:43:10.865362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:52.327 pt2 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.327 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.327 "name": "raid_bdev1", 00:16:52.327 "uuid": "144db9af-b8b9-4056-89ba-af25474ed866", 00:16:52.327 "strip_size_kb": 0, 00:16:52.327 "state": "configuring", 00:16:52.327 "raid_level": "raid1", 00:16:52.327 "superblock": true, 00:16:52.327 "num_base_bdevs": 4, 00:16:52.328 "num_base_bdevs_discovered": 1, 00:16:52.328 "num_base_bdevs_operational": 3, 00:16:52.328 "base_bdevs_list": [ 00:16:52.328 { 00:16:52.328 "name": null, 00:16:52.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.328 "is_configured": false, 00:16:52.328 "data_offset": 2048, 00:16:52.328 "data_size": 63488 00:16:52.328 }, 00:16:52.328 { 00:16:52.328 "name": "pt2", 00:16:52.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.328 "is_configured": true, 00:16:52.328 "data_offset": 2048, 00:16:52.328 "data_size": 63488 00:16:52.328 }, 00:16:52.328 { 00:16:52.328 "name": null, 00:16:52.328 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:52.328 "is_configured": false, 00:16:52.328 "data_offset": 2048, 00:16:52.328 "data_size": 63488 00:16:52.328 }, 00:16:52.328 { 00:16:52.328 "name": null, 00:16:52.328 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:52.328 "is_configured": false, 00:16:52.328 "data_offset": 2048, 00:16:52.328 "data_size": 63488 00:16:52.328 } 00:16:52.328 ] 00:16:52.328 }' 
00:16:52.328 06:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.328 06:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.895 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:52.895 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.896 [2024-12-06 06:43:11.402374] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:52.896 [2024-12-06 06:43:11.402452] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.896 [2024-12-06 06:43:11.402486] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:52.896 [2024-12-06 06:43:11.402502] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.896 [2024-12-06 06:43:11.403078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.896 [2024-12-06 06:43:11.403104] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:52.896 [2024-12-06 06:43:11.403213] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:52.896 [2024-12-06 06:43:11.403254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:52.896 pt3 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.896 "name": "raid_bdev1", 00:16:52.896 "uuid": "144db9af-b8b9-4056-89ba-af25474ed866", 00:16:52.896 "strip_size_kb": 0, 00:16:52.896 "state": "configuring", 00:16:52.896 "raid_level": "raid1", 00:16:52.896 "superblock": true, 00:16:52.896 "num_base_bdevs": 4, 00:16:52.896 "num_base_bdevs_discovered": 2, 00:16:52.896 "num_base_bdevs_operational": 3, 00:16:52.896 
"base_bdevs_list": [ 00:16:52.896 { 00:16:52.896 "name": null, 00:16:52.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.896 "is_configured": false, 00:16:52.896 "data_offset": 2048, 00:16:52.896 "data_size": 63488 00:16:52.896 }, 00:16:52.896 { 00:16:52.896 "name": "pt2", 00:16:52.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.896 "is_configured": true, 00:16:52.896 "data_offset": 2048, 00:16:52.896 "data_size": 63488 00:16:52.896 }, 00:16:52.896 { 00:16:52.896 "name": "pt3", 00:16:52.896 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:52.896 "is_configured": true, 00:16:52.896 "data_offset": 2048, 00:16:52.896 "data_size": 63488 00:16:52.896 }, 00:16:52.896 { 00:16:52.896 "name": null, 00:16:52.896 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:52.896 "is_configured": false, 00:16:52.896 "data_offset": 2048, 00:16:52.896 "data_size": 63488 00:16:52.896 } 00:16:52.896 ] 00:16:52.896 }' 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.896 06:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.462 [2024-12-06 06:43:11.894554] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:53.462 [2024-12-06 06:43:11.894645] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.462 [2024-12-06 06:43:11.894685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:53.462 [2024-12-06 06:43:11.894700] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.462 [2024-12-06 06:43:11.895254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.462 [2024-12-06 06:43:11.895292] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:53.462 [2024-12-06 06:43:11.895402] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:53.462 [2024-12-06 06:43:11.895434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:53.462 [2024-12-06 06:43:11.895614] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:53.462 [2024-12-06 06:43:11.895631] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:53.462 [2024-12-06 06:43:11.895938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:53.462 [2024-12-06 06:43:11.896129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:53.462 [2024-12-06 06:43:11.896151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:53.462 [2024-12-06 06:43:11.896320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.462 pt4 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.462 06:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.463 06:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.463 06:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.463 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.463 "name": "raid_bdev1", 00:16:53.463 "uuid": "144db9af-b8b9-4056-89ba-af25474ed866", 00:16:53.463 "strip_size_kb": 0, 00:16:53.463 "state": "online", 00:16:53.463 "raid_level": "raid1", 00:16:53.463 "superblock": true, 00:16:53.463 "num_base_bdevs": 4, 00:16:53.463 "num_base_bdevs_discovered": 3, 00:16:53.463 "num_base_bdevs_operational": 3, 00:16:53.463 "base_bdevs_list": [ 00:16:53.463 { 00:16:53.463 "name": null, 00:16:53.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.463 "is_configured": false, 00:16:53.463 
"data_offset": 2048, 00:16:53.463 "data_size": 63488 00:16:53.463 }, 00:16:53.463 { 00:16:53.463 "name": "pt2", 00:16:53.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:53.463 "is_configured": true, 00:16:53.463 "data_offset": 2048, 00:16:53.463 "data_size": 63488 00:16:53.463 }, 00:16:53.463 { 00:16:53.463 "name": "pt3", 00:16:53.463 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:53.463 "is_configured": true, 00:16:53.463 "data_offset": 2048, 00:16:53.463 "data_size": 63488 00:16:53.463 }, 00:16:53.463 { 00:16:53.463 "name": "pt4", 00:16:53.463 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:53.463 "is_configured": true, 00:16:53.463 "data_offset": 2048, 00:16:53.463 "data_size": 63488 00:16:53.463 } 00:16:53.463 ] 00:16:53.463 }' 00:16:53.463 06:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.463 06:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.053 [2024-12-06 06:43:12.386623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.053 [2024-12-06 06:43:12.386660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.053 [2024-12-06 06:43:12.386760] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.053 [2024-12-06 06:43:12.386855] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.053 [2024-12-06 06:43:12.386875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:54.053 06:43:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.053 [2024-12-06 06:43:12.458622] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:54.053 [2024-12-06 06:43:12.458699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:54.053 [2024-12-06 06:43:12.458726] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:54.053 [2024-12-06 06:43:12.458745] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.053 [2024-12-06 06:43:12.461602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.053 [2024-12-06 06:43:12.461648] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:54.053 [2024-12-06 06:43:12.461751] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:54.053 [2024-12-06 06:43:12.461826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:54.053 [2024-12-06 06:43:12.462006] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:54.053 [2024-12-06 06:43:12.462029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.053 [2024-12-06 06:43:12.462050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:54.053 [2024-12-06 06:43:12.462124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:54.053 [2024-12-06 06:43:12.462265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:54.053 pt1 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.053 "name": "raid_bdev1", 00:16:54.053 "uuid": "144db9af-b8b9-4056-89ba-af25474ed866", 00:16:54.053 "strip_size_kb": 0, 00:16:54.053 "state": "configuring", 00:16:54.053 "raid_level": "raid1", 00:16:54.053 "superblock": true, 00:16:54.053 "num_base_bdevs": 4, 00:16:54.053 "num_base_bdevs_discovered": 2, 00:16:54.053 "num_base_bdevs_operational": 3, 00:16:54.053 "base_bdevs_list": [ 00:16:54.053 { 00:16:54.053 "name": null, 00:16:54.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.053 "is_configured": false, 00:16:54.053 "data_offset": 2048, 00:16:54.053 
"data_size": 63488 00:16:54.053 }, 00:16:54.053 { 00:16:54.053 "name": "pt2", 00:16:54.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.053 "is_configured": true, 00:16:54.053 "data_offset": 2048, 00:16:54.053 "data_size": 63488 00:16:54.053 }, 00:16:54.053 { 00:16:54.053 "name": "pt3", 00:16:54.053 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:54.053 "is_configured": true, 00:16:54.053 "data_offset": 2048, 00:16:54.053 "data_size": 63488 00:16:54.053 }, 00:16:54.053 { 00:16:54.053 "name": null, 00:16:54.053 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:54.053 "is_configured": false, 00:16:54.053 "data_offset": 2048, 00:16:54.053 "data_size": 63488 00:16:54.053 } 00:16:54.053 ] 00:16:54.053 }' 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.053 06:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.647 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:54.647 06:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:54.647 06:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.647 06:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.647 [2024-12-06 
06:43:13.046796] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:54.647 [2024-12-06 06:43:13.046874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.647 [2024-12-06 06:43:13.046907] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:54.647 [2024-12-06 06:43:13.046922] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.647 [2024-12-06 06:43:13.047474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.647 [2024-12-06 06:43:13.047500] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:54.647 [2024-12-06 06:43:13.047623] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:54.647 [2024-12-06 06:43:13.047657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:54.647 [2024-12-06 06:43:13.047820] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:54.647 [2024-12-06 06:43:13.047835] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:54.647 [2024-12-06 06:43:13.048145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:54.647 [2024-12-06 06:43:13.048324] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:54.647 [2024-12-06 06:43:13.048344] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:54.647 [2024-12-06 06:43:13.048538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.647 pt4 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:54.647 06:43:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.647 "name": "raid_bdev1", 00:16:54.647 "uuid": "144db9af-b8b9-4056-89ba-af25474ed866", 00:16:54.647 "strip_size_kb": 0, 00:16:54.647 "state": "online", 00:16:54.647 "raid_level": "raid1", 00:16:54.647 "superblock": true, 00:16:54.647 "num_base_bdevs": 4, 00:16:54.647 "num_base_bdevs_discovered": 3, 00:16:54.647 "num_base_bdevs_operational": 3, 00:16:54.647 "base_bdevs_list": [ 00:16:54.647 { 
00:16:54.647 "name": null, 00:16:54.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.647 "is_configured": false, 00:16:54.647 "data_offset": 2048, 00:16:54.647 "data_size": 63488 00:16:54.647 }, 00:16:54.647 { 00:16:54.647 "name": "pt2", 00:16:54.647 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.647 "is_configured": true, 00:16:54.647 "data_offset": 2048, 00:16:54.647 "data_size": 63488 00:16:54.647 }, 00:16:54.647 { 00:16:54.647 "name": "pt3", 00:16:54.647 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:54.647 "is_configured": true, 00:16:54.647 "data_offset": 2048, 00:16:54.647 "data_size": 63488 00:16:54.647 }, 00:16:54.647 { 00:16:54.647 "name": "pt4", 00:16:54.647 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:54.647 "is_configured": true, 00:16:54.647 "data_offset": 2048, 00:16:54.647 "data_size": 63488 00:16:54.647 } 00:16:54.647 ] 00:16:54.647 }' 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.647 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.213 
06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:55.213 [2024-12-06 06:43:13.627299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 144db9af-b8b9-4056-89ba-af25474ed866 '!=' 144db9af-b8b9-4056-89ba-af25474ed866 ']' 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74868 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74868 ']' 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74868 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74868 00:16:55.213 killing process with pid 74868 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.213 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.214 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74868' 00:16:55.214 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74868 00:16:55.214 [2024-12-06 06:43:13.701019] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:55.214 06:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74868 00:16:55.214 [2024-12-06 06:43:13.701134] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.214 [2024-12-06 06:43:13.701232] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:55.214 [2024-12-06 06:43:13.701252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:55.471 [2024-12-06 06:43:14.053789] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:56.842 ************************************ 00:16:56.842 END TEST raid_superblock_test 00:16:56.842 ************************************ 00:16:56.842 06:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:56.842 00:16:56.842 real 0m9.322s 00:16:56.842 user 0m15.356s 00:16:56.842 sys 0m1.330s 00:16:56.842 06:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.842 06:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.842 06:43:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:16:56.842 06:43:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:56.842 06:43:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.842 06:43:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:56.842 ************************************ 00:16:56.842 START TEST raid_read_error_test 00:16:56.842 ************************************ 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:56.842 06:43:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.z8Lz7F4DDt 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75362 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75362 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75362 ']' 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.842 06:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.842 [2024-12-06 06:43:15.254727] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:16:56.842 [2024-12-06 06:43:15.254881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75362 ] 00:16:56.842 [2024-12-06 06:43:15.427851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.100 [2024-12-06 06:43:15.559646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.358 [2024-12-06 06:43:15.764192] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.358 [2024-12-06 06:43:15.764251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.616 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.616 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:57.616 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:57.616 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:57.616 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.616 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.897 BaseBdev1_malloc 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.897 true 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.897 [2024-12-06 06:43:16.303845] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:57.897 [2024-12-06 06:43:16.303913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.897 [2024-12-06 06:43:16.303942] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:57.897 [2024-12-06 06:43:16.303960] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.897 [2024-12-06 06:43:16.306786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.897 [2024-12-06 06:43:16.306836] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:57.897 BaseBdev1 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.897 BaseBdev2_malloc 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.897 true 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.897 [2024-12-06 06:43:16.363421] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:57.897 [2024-12-06 06:43:16.363675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.897 [2024-12-06 06:43:16.363774] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:57.897 [2024-12-06 06:43:16.363872] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.897 [2024-12-06 06:43:16.366712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.897 [2024-12-06 06:43:16.366850] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:57.897 BaseBdev2 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.897 BaseBdev3_malloc 00:16:57.897 06:43:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.897 true 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.897 [2024-12-06 06:43:16.435835] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:57.897 [2024-12-06 06:43:16.436108] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.897 [2024-12-06 06:43:16.436210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:57.897 [2024-12-06 06:43:16.436300] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.897 [2024-12-06 06:43:16.439201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.897 [2024-12-06 06:43:16.439335] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:57.897 BaseBdev3 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.897 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.897 BaseBdev4_malloc 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.898 true 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.898 [2024-12-06 06:43:16.499738] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:57.898 [2024-12-06 06:43:16.499997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.898 [2024-12-06 06:43:16.500094] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:57.898 [2024-12-06 06:43:16.500183] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.898 [2024-12-06 06:43:16.503011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.898 [2024-12-06 06:43:16.503155] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:57.898 BaseBdev4 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.898 [2024-12-06 06:43:16.511933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:57.898 [2024-12-06 06:43:16.514399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.898 [2024-12-06 06:43:16.514513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:57.898 [2024-12-06 06:43:16.514635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:57.898 [2024-12-06 06:43:16.514965] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:57.898 [2024-12-06 06:43:16.514998] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:57.898 [2024-12-06 06:43:16.515336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:57.898 [2024-12-06 06:43:16.515603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:57.898 [2024-12-06 06:43:16.515628] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:57.898 [2024-12-06 06:43:16.515896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:57.898 06:43:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.898 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.169 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.169 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.169 "name": "raid_bdev1", 00:16:58.169 "uuid": "64f82fa5-9306-4770-810c-a4526e6d7488", 00:16:58.169 "strip_size_kb": 0, 00:16:58.169 "state": "online", 00:16:58.169 "raid_level": "raid1", 00:16:58.169 "superblock": true, 00:16:58.169 "num_base_bdevs": 4, 00:16:58.169 "num_base_bdevs_discovered": 4, 00:16:58.169 "num_base_bdevs_operational": 4, 00:16:58.169 "base_bdevs_list": [ 00:16:58.169 { 
00:16:58.169 "name": "BaseBdev1", 00:16:58.169 "uuid": "73cf96e4-5e32-5673-b2a6-ce25fcdee8b3", 00:16:58.169 "is_configured": true, 00:16:58.169 "data_offset": 2048, 00:16:58.169 "data_size": 63488 00:16:58.169 }, 00:16:58.169 { 00:16:58.169 "name": "BaseBdev2", 00:16:58.169 "uuid": "34966967-4a4b-587c-8c94-7e33a9c6f65d", 00:16:58.169 "is_configured": true, 00:16:58.169 "data_offset": 2048, 00:16:58.169 "data_size": 63488 00:16:58.169 }, 00:16:58.169 { 00:16:58.169 "name": "BaseBdev3", 00:16:58.169 "uuid": "4f883bf9-fad9-59ac-9be5-befe1e347936", 00:16:58.169 "is_configured": true, 00:16:58.169 "data_offset": 2048, 00:16:58.169 "data_size": 63488 00:16:58.169 }, 00:16:58.169 { 00:16:58.169 "name": "BaseBdev4", 00:16:58.169 "uuid": "ee840fd3-f1e7-54ff-878d-a2dd6b12164e", 00:16:58.169 "is_configured": true, 00:16:58.169 "data_offset": 2048, 00:16:58.169 "data_size": 63488 00:16:58.169 } 00:16:58.169 ] 00:16:58.169 }' 00:16:58.169 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.169 06:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.428 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:58.428 06:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:58.688 [2024-12-06 06:43:17.133500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.624 06:43:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.624 06:43:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.624 "name": "raid_bdev1", 00:16:59.624 "uuid": "64f82fa5-9306-4770-810c-a4526e6d7488", 00:16:59.624 "strip_size_kb": 0, 00:16:59.624 "state": "online", 00:16:59.624 "raid_level": "raid1", 00:16:59.624 "superblock": true, 00:16:59.624 "num_base_bdevs": 4, 00:16:59.624 "num_base_bdevs_discovered": 4, 00:16:59.624 "num_base_bdevs_operational": 4, 00:16:59.624 "base_bdevs_list": [ 00:16:59.624 { 00:16:59.624 "name": "BaseBdev1", 00:16:59.624 "uuid": "73cf96e4-5e32-5673-b2a6-ce25fcdee8b3", 00:16:59.624 "is_configured": true, 00:16:59.624 "data_offset": 2048, 00:16:59.624 "data_size": 63488 00:16:59.624 }, 00:16:59.624 { 00:16:59.624 "name": "BaseBdev2", 00:16:59.624 "uuid": "34966967-4a4b-587c-8c94-7e33a9c6f65d", 00:16:59.624 "is_configured": true, 00:16:59.624 "data_offset": 2048, 00:16:59.624 "data_size": 63488 00:16:59.624 }, 00:16:59.624 { 00:16:59.624 "name": "BaseBdev3", 00:16:59.624 "uuid": "4f883bf9-fad9-59ac-9be5-befe1e347936", 00:16:59.624 "is_configured": true, 00:16:59.624 "data_offset": 2048, 00:16:59.624 "data_size": 63488 00:16:59.624 }, 00:16:59.624 { 00:16:59.624 "name": "BaseBdev4", 00:16:59.624 "uuid": "ee840fd3-f1e7-54ff-878d-a2dd6b12164e", 00:16:59.624 "is_configured": true, 00:16:59.624 "data_offset": 2048, 00:16:59.624 "data_size": 63488 00:16:59.624 } 00:16:59.624 ] 00:16:59.624 }' 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.624 06:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.883 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:59.883 06:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.883 06:43:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:59.883 [2024-12-06 06:43:18.513876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.883 [2024-12-06 06:43:18.513918] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.883 [2024-12-06 06:43:18.517218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.883 [2024-12-06 06:43:18.517302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.883 [2024-12-06 06:43:18.517461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.883 [2024-12-06 06:43:18.517483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:59.883 { 00:16:59.883 "results": [ 00:16:59.883 { 00:16:59.883 "job": "raid_bdev1", 00:16:59.883 "core_mask": "0x1", 00:16:59.883 "workload": "randrw", 00:16:59.883 "percentage": 50, 00:16:59.883 "status": "finished", 00:16:59.883 "queue_depth": 1, 00:16:59.883 "io_size": 131072, 00:16:59.883 "runtime": 1.377941, 00:16:59.883 "iops": 7468.389430316683, 00:16:59.883 "mibps": 933.5486787895853, 00:16:59.883 "io_failed": 0, 00:16:59.883 "io_timeout": 0, 00:16:59.883 "avg_latency_us": 129.48803208452225, 00:16:59.883 "min_latency_us": 42.589090909090906, 00:16:59.883 "max_latency_us": 1832.0290909090909 00:16:59.883 } 00:16:59.883 ], 00:16:59.883 "core_count": 1 00:16:59.883 } 00:16:59.883 06:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.883 06:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75362 00:16:59.883 06:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75362 ']' 00:16:59.883 06:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75362 00:16:59.883 06:43:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:16:59.883 06:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.141 06:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75362 00:17:00.141 06:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.141 06:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.141 killing process with pid 75362 00:17:00.141 06:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75362' 00:17:00.141 06:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75362 00:17:00.141 [2024-12-06 06:43:18.552776] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.141 06:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75362 00:17:00.400 [2024-12-06 06:43:18.847650] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:01.336 06:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.z8Lz7F4DDt 00:17:01.337 06:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:01.337 06:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:01.337 06:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:17:01.337 06:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:17:01.337 06:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:01.337 06:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:01.337 06:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:01.337 00:17:01.337 real 0m4.826s 00:17:01.337 user 0m5.928s 00:17:01.337 sys 0m0.565s 
00:17:01.337 06:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.337 06:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.337 ************************************ 00:17:01.337 END TEST raid_read_error_test 00:17:01.337 ************************************ 00:17:01.595 06:43:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:17:01.595 06:43:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:01.595 06:43:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.595 06:43:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:01.595 ************************************ 00:17:01.595 START TEST raid_write_error_test 00:17:01.595 ************************************ 00:17:01.595 06:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:17:01.595 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:17:01.595 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:01.595 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:17:01.595 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:01.595 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:01.595 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:01.595 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:01.595 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:01.595 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:01.595 06:43:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:01.595 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:01.595 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:01.595 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:01.595 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:01.595 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.d4ti3ECwVy 00:17:01.596 06:43:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75508 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75508 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75508 ']' 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:01.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:01.596 06:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.596 [2024-12-06 06:43:20.150836] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:17:01.596 [2024-12-06 06:43:20.151014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75508 ] 00:17:01.925 [2024-12-06 06:43:20.328273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.925 [2024-12-06 06:43:20.456636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.184 [2024-12-06 06:43:20.659841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.184 [2024-12-06 06:43:20.659921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.752 BaseBdev1_malloc 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.752 true 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.752 [2024-12-06 06:43:21.182077] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:02.752 [2024-12-06 06:43:21.182139] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.752 [2024-12-06 06:43:21.182167] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:02.752 [2024-12-06 06:43:21.182186] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.752 [2024-12-06 06:43:21.184981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.752 [2024-12-06 06:43:21.185027] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:02.752 BaseBdev1 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.752 BaseBdev2_malloc 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:02.752 06:43:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.752 true 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.752 [2024-12-06 06:43:21.246244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:02.752 [2024-12-06 06:43:21.246307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.752 [2024-12-06 06:43:21.246332] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:02.752 [2024-12-06 06:43:21.246348] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.752 [2024-12-06 06:43:21.249068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.752 [2024-12-06 06:43:21.249112] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:02.752 BaseBdev2 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:17:02.752 BaseBdev3_malloc 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.752 true 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.752 [2024-12-06 06:43:21.315232] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:02.752 [2024-12-06 06:43:21.315293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.752 [2024-12-06 06:43:21.315318] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:02.752 [2024-12-06 06:43:21.315335] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.752 [2024-12-06 06:43:21.318084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.752 [2024-12-06 06:43:21.318129] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:02.752 BaseBdev3 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.752 BaseBdev4_malloc 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.752 true 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.752 [2024-12-06 06:43:21.375254] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:02.752 [2024-12-06 06:43:21.375317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.752 [2024-12-06 06:43:21.375343] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:02.752 [2024-12-06 06:43:21.375361] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.752 [2024-12-06 06:43:21.378089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.752 [2024-12-06 06:43:21.378137] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:02.752 BaseBdev4 
00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:02.752 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.753 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.753 [2024-12-06 06:43:21.383329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.753 [2024-12-06 06:43:21.385752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:02.753 [2024-12-06 06:43:21.385872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:02.753 [2024-12-06 06:43:21.385972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:02.753 [2024-12-06 06:43:21.386281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:02.753 [2024-12-06 06:43:21.386313] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:02.753 [2024-12-06 06:43:21.386643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:02.753 [2024-12-06 06:43:21.386877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:02.753 [2024-12-06 06:43:21.386910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:02.753 [2024-12-06 06:43:21.387100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.753 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.753 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:17:02.753 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.753 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.753 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.753 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.753 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:02.753 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.753 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.753 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.753 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.753 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.753 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.753 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.753 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.010 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.010 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.010 "name": "raid_bdev1", 00:17:03.010 "uuid": "033e272c-9005-41e1-b41f-42f2d6fe6cb5", 00:17:03.010 "strip_size_kb": 0, 00:17:03.010 "state": "online", 00:17:03.010 "raid_level": "raid1", 00:17:03.010 "superblock": true, 00:17:03.010 "num_base_bdevs": 4, 00:17:03.010 "num_base_bdevs_discovered": 4, 00:17:03.010 
"num_base_bdevs_operational": 4, 00:17:03.010 "base_bdevs_list": [ 00:17:03.010 { 00:17:03.010 "name": "BaseBdev1", 00:17:03.010 "uuid": "8d030475-4dd8-59a2-99a7-0cbed52254d0", 00:17:03.010 "is_configured": true, 00:17:03.010 "data_offset": 2048, 00:17:03.010 "data_size": 63488 00:17:03.010 }, 00:17:03.010 { 00:17:03.010 "name": "BaseBdev2", 00:17:03.010 "uuid": "065fab6c-4a88-5aa9-b469-e6e7e89fa758", 00:17:03.010 "is_configured": true, 00:17:03.010 "data_offset": 2048, 00:17:03.010 "data_size": 63488 00:17:03.010 }, 00:17:03.010 { 00:17:03.010 "name": "BaseBdev3", 00:17:03.010 "uuid": "6c8671fe-b318-5c15-b8f3-8d8284fc4173", 00:17:03.010 "is_configured": true, 00:17:03.010 "data_offset": 2048, 00:17:03.010 "data_size": 63488 00:17:03.010 }, 00:17:03.010 { 00:17:03.010 "name": "BaseBdev4", 00:17:03.010 "uuid": "90ab9dd2-dd44-5bca-bc72-caf04f99ec0c", 00:17:03.010 "is_configured": true, 00:17:03.010 "data_offset": 2048, 00:17:03.010 "data_size": 63488 00:17:03.010 } 00:17:03.010 ] 00:17:03.010 }' 00:17:03.010 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.010 06:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.268 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:03.268 06:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:03.526 [2024-12-06 06:43:22.012934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.462 [2024-12-06 06:43:22.895812] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:04.462 [2024-12-06 06:43:22.895873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:04.462 [2024-12-06 06:43:22.896151] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.462 "name": "raid_bdev1", 00:17:04.462 "uuid": "033e272c-9005-41e1-b41f-42f2d6fe6cb5", 00:17:04.462 "strip_size_kb": 0, 00:17:04.462 "state": "online", 00:17:04.462 "raid_level": "raid1", 00:17:04.462 "superblock": true, 00:17:04.462 "num_base_bdevs": 4, 00:17:04.462 "num_base_bdevs_discovered": 3, 00:17:04.462 "num_base_bdevs_operational": 3, 00:17:04.462 "base_bdevs_list": [ 00:17:04.462 { 00:17:04.462 "name": null, 00:17:04.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.462 "is_configured": false, 00:17:04.462 "data_offset": 0, 00:17:04.462 "data_size": 63488 00:17:04.462 }, 00:17:04.462 { 00:17:04.462 "name": "BaseBdev2", 00:17:04.462 "uuid": "065fab6c-4a88-5aa9-b469-e6e7e89fa758", 00:17:04.462 "is_configured": true, 00:17:04.462 "data_offset": 2048, 00:17:04.462 "data_size": 63488 00:17:04.462 }, 00:17:04.462 { 00:17:04.462 "name": "BaseBdev3", 00:17:04.462 "uuid": "6c8671fe-b318-5c15-b8f3-8d8284fc4173", 00:17:04.462 "is_configured": true, 00:17:04.462 "data_offset": 2048, 00:17:04.462 "data_size": 63488 00:17:04.462 }, 00:17:04.462 { 00:17:04.462 "name": "BaseBdev4", 00:17:04.462 "uuid": "90ab9dd2-dd44-5bca-bc72-caf04f99ec0c", 00:17:04.462 "is_configured": true, 00:17:04.462 "data_offset": 2048, 00:17:04.462 "data_size": 63488 00:17:04.462 } 00:17:04.462 ] 
00:17:04.462 }' 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.462 06:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.029 06:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:05.029 06:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.029 06:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.029 [2024-12-06 06:43:23.408110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:05.029 [2024-12-06 06:43:23.408147] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.029 [2024-12-06 06:43:23.411467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.029 [2024-12-06 06:43:23.411543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.029 [2024-12-06 06:43:23.411682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.029 [2024-12-06 06:43:23.411702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:05.029 { 00:17:05.029 "results": [ 00:17:05.029 { 00:17:05.029 "job": "raid_bdev1", 00:17:05.029 "core_mask": "0x1", 00:17:05.029 "workload": "randrw", 00:17:05.029 "percentage": 50, 00:17:05.029 "status": "finished", 00:17:05.029 "queue_depth": 1, 00:17:05.029 "io_size": 131072, 00:17:05.029 "runtime": 1.392586, 00:17:05.029 "iops": 8165.384399958064, 00:17:05.029 "mibps": 1020.673049994758, 00:17:05.029 "io_failed": 0, 00:17:05.029 "io_timeout": 0, 00:17:05.029 "avg_latency_us": 118.03369144794175, 00:17:05.029 "min_latency_us": 42.589090909090906, 00:17:05.029 "max_latency_us": 1899.0545454545454 00:17:05.029 } 00:17:05.029 ], 00:17:05.029 "core_count": 1 
00:17:05.029 } 00:17:05.029 06:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.029 06:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75508 00:17:05.030 06:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75508 ']' 00:17:05.030 06:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75508 00:17:05.030 06:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:17:05.030 06:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.030 06:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75508 00:17:05.030 06:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.030 06:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.030 06:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75508' 00:17:05.030 killing process with pid 75508 00:17:05.030 06:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75508 00:17:05.030 [2024-12-06 06:43:23.448607] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:05.030 06:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75508 00:17:05.288 [2024-12-06 06:43:23.740751] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.223 06:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.d4ti3ECwVy 00:17:06.223 06:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:06.223 06:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:06.223 06:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:17:06.223 06:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:17:06.223 06:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:06.223 06:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:06.223 06:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:06.223 00:17:06.223 real 0m4.819s 00:17:06.223 user 0m5.918s 00:17:06.223 sys 0m0.590s 00:17:06.223 06:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.223 ************************************ 00:17:06.223 END TEST raid_write_error_test 00:17:06.223 ************************************ 00:17:06.223 06:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.482 06:43:24 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:17:06.482 06:43:24 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:17:06.482 06:43:24 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:17:06.482 06:43:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:06.482 06:43:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.482 06:43:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.482 ************************************ 00:17:06.482 START TEST raid_rebuild_test 00:17:06.482 ************************************ 00:17:06.482 06:43:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:17:06.482 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:06.482 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:06.482 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:06.482 
06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:06.482 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:06.482 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:06.482 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.482 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:06.482 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.482 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.482 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:06.482 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.482 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.482 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75656 00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75656 00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75656 ']' 00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.483 06:43:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.483 [2024-12-06 06:43:25.012083] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:17:06.483 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:06.483 Zero copy mechanism will not be used. 
00:17:06.483 [2024-12-06 06:43:25.012282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75656 ] 00:17:06.741 [2024-12-06 06:43:25.200722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.741 [2024-12-06 06:43:25.353896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.999 [2024-12-06 06:43:25.565688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.999 [2024-12-06 06:43:25.565761] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.566 06:43:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.566 06:43:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:07.566 06:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:07.566 06:43:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:07.566 06:43:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.566 06:43:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.566 BaseBdev1_malloc 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.566 [2024-12-06 06:43:26.061255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:07.566 
[2024-12-06 06:43:26.061370] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.566 [2024-12-06 06:43:26.061420] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:07.566 [2024-12-06 06:43:26.061449] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.566 [2024-12-06 06:43:26.065194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.566 [2024-12-06 06:43:26.065263] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:07.566 BaseBdev1 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.566 BaseBdev2_malloc 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.566 [2024-12-06 06:43:26.129013] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:07.566 [2024-12-06 06:43:26.129094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.566 [2024-12-06 06:43:26.129128] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:17:07.566 [2024-12-06 06:43:26.129147] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.566 [2024-12-06 06:43:26.131917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.566 [2024-12-06 06:43:26.131966] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:07.566 BaseBdev2 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.566 spare_malloc 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.566 spare_delay 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.566 [2024-12-06 06:43:26.196886] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:07.566 [2024-12-06 06:43:26.196964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:07.566 [2024-12-06 06:43:26.196997] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:07.566 [2024-12-06 06:43:26.197016] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.566 [2024-12-06 06:43:26.199826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.566 [2024-12-06 06:43:26.199883] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:07.566 spare 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.566 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.566 [2024-12-06 06:43:26.204952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.566 [2024-12-06 06:43:26.207342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:07.566 [2024-12-06 06:43:26.207474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:07.566 [2024-12-06 06:43:26.207498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:07.566 [2024-12-06 06:43:26.207851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:07.566 [2024-12-06 06:43:26.208083] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:07.566 [2024-12-06 06:43:26.208112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:07.566 [2024-12-06 06:43:26.208304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:07.567 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.567 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:07.567 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.567 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.567 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.825 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.825 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:07.825 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.825 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.825 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.825 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.825 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.825 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.825 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.825 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.825 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.825 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.825 "name": "raid_bdev1", 00:17:07.825 "uuid": "d3e5ddd5-a8e7-47e8-9485-39438363c22f", 00:17:07.825 "strip_size_kb": 0, 00:17:07.825 "state": "online", 00:17:07.825 
"raid_level": "raid1", 00:17:07.825 "superblock": false, 00:17:07.825 "num_base_bdevs": 2, 00:17:07.825 "num_base_bdevs_discovered": 2, 00:17:07.825 "num_base_bdevs_operational": 2, 00:17:07.825 "base_bdevs_list": [ 00:17:07.825 { 00:17:07.825 "name": "BaseBdev1", 00:17:07.825 "uuid": "1674f24d-1d07-59b6-918a-82783c75dcbe", 00:17:07.825 "is_configured": true, 00:17:07.825 "data_offset": 0, 00:17:07.825 "data_size": 65536 00:17:07.825 }, 00:17:07.825 { 00:17:07.825 "name": "BaseBdev2", 00:17:07.825 "uuid": "db927ecf-867b-5658-be52-e454a00ca11a", 00:17:07.825 "is_configured": true, 00:17:07.825 "data_offset": 0, 00:17:07.825 "data_size": 65536 00:17:07.825 } 00:17:07.825 ] 00:17:07.825 }' 00:17:07.825 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.825 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.083 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.083 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:08.083 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.083 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.342 [2024-12-06 06:43:26.729497] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.342 06:43:26 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:08.342 06:43:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:08.600 [2024-12-06 06:43:27.093282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:08.600 /dev/nbd0 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:08.600 1+0 records in 00:17:08.600 1+0 records out 00:17:08.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347067 s, 11.8 MB/s 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:08.600 06:43:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:17:15.154 65536+0 records in 00:17:15.154 65536+0 records out 00:17:15.154 33554432 bytes (34 MB, 32 MiB) copied, 6.59363 s, 5.1 MB/s 00:17:15.154 06:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:15.154 06:43:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:15.154 06:43:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:15.154 06:43:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:15.154 06:43:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:15.154 06:43:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:15.154 06:43:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:15.413 [2024-12-06 06:43:34.018466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.413 06:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:15.413 06:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:15.413 06:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:15.413 06:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:15.413 06:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:15.413 06:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:15.413 06:43:34 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:17:15.413 06:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:15.413 06:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:15.413 06:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.413 06:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.413 [2024-12-06 06:43:34.054592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.672 06:43:34 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.672 "name": "raid_bdev1", 00:17:15.672 "uuid": "d3e5ddd5-a8e7-47e8-9485-39438363c22f", 00:17:15.672 "strip_size_kb": 0, 00:17:15.672 "state": "online", 00:17:15.672 "raid_level": "raid1", 00:17:15.672 "superblock": false, 00:17:15.672 "num_base_bdevs": 2, 00:17:15.672 "num_base_bdevs_discovered": 1, 00:17:15.672 "num_base_bdevs_operational": 1, 00:17:15.672 "base_bdevs_list": [ 00:17:15.672 { 00:17:15.672 "name": null, 00:17:15.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.672 "is_configured": false, 00:17:15.672 "data_offset": 0, 00:17:15.672 "data_size": 65536 00:17:15.672 }, 00:17:15.672 { 00:17:15.672 "name": "BaseBdev2", 00:17:15.672 "uuid": "db927ecf-867b-5658-be52-e454a00ca11a", 00:17:15.672 "is_configured": true, 00:17:15.672 "data_offset": 0, 00:17:15.672 "data_size": 65536 00:17:15.672 } 00:17:15.672 ] 00:17:15.672 }' 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.672 06:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.931 06:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:15.931 06:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.931 06:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.931 [2024-12-06 06:43:34.538741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.931 [2024-12-06 06:43:34.555276] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:17:15.931 06:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.931 06:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:15.931 [2024-12-06 06:43:34.557856] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.310 "name": "raid_bdev1", 00:17:17.310 "uuid": "d3e5ddd5-a8e7-47e8-9485-39438363c22f", 00:17:17.310 "strip_size_kb": 0, 00:17:17.310 "state": "online", 00:17:17.310 "raid_level": "raid1", 00:17:17.310 "superblock": false, 00:17:17.310 "num_base_bdevs": 2, 00:17:17.310 "num_base_bdevs_discovered": 2, 00:17:17.310 "num_base_bdevs_operational": 2, 00:17:17.310 "process": { 00:17:17.310 "type": "rebuild", 00:17:17.310 "target": "spare", 00:17:17.310 "progress": { 00:17:17.310 
"blocks": 20480, 00:17:17.310 "percent": 31 00:17:17.310 } 00:17:17.310 }, 00:17:17.310 "base_bdevs_list": [ 00:17:17.310 { 00:17:17.310 "name": "spare", 00:17:17.310 "uuid": "bf5685b9-c4db-5181-84e5-c2e348150f01", 00:17:17.310 "is_configured": true, 00:17:17.310 "data_offset": 0, 00:17:17.310 "data_size": 65536 00:17:17.310 }, 00:17:17.310 { 00:17:17.310 "name": "BaseBdev2", 00:17:17.310 "uuid": "db927ecf-867b-5658-be52-e454a00ca11a", 00:17:17.310 "is_configured": true, 00:17:17.310 "data_offset": 0, 00:17:17.310 "data_size": 65536 00:17:17.310 } 00:17:17.310 ] 00:17:17.310 }' 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.310 [2024-12-06 06:43:35.711292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.310 [2024-12-06 06:43:35.766598] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.310 [2024-12-06 06:43:35.766883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.310 [2024-12-06 06:43:35.767040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.310 [2024-12-06 06:43:35.767101] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.310 06:43:35 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.310 "name": "raid_bdev1", 00:17:17.310 "uuid": "d3e5ddd5-a8e7-47e8-9485-39438363c22f", 00:17:17.310 "strip_size_kb": 0, 00:17:17.310 "state": "online", 00:17:17.310 "raid_level": "raid1", 00:17:17.310 
"superblock": false, 00:17:17.310 "num_base_bdevs": 2, 00:17:17.310 "num_base_bdevs_discovered": 1, 00:17:17.310 "num_base_bdevs_operational": 1, 00:17:17.310 "base_bdevs_list": [ 00:17:17.310 { 00:17:17.310 "name": null, 00:17:17.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.310 "is_configured": false, 00:17:17.310 "data_offset": 0, 00:17:17.310 "data_size": 65536 00:17:17.310 }, 00:17:17.310 { 00:17:17.310 "name": "BaseBdev2", 00:17:17.310 "uuid": "db927ecf-867b-5658-be52-e454a00ca11a", 00:17:17.310 "is_configured": true, 00:17:17.310 "data_offset": 0, 00:17:17.310 "data_size": 65536 00:17:17.310 } 00:17:17.310 ] 00:17:17.310 }' 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.310 06:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:17.878 "name": "raid_bdev1", 00:17:17.878 "uuid": "d3e5ddd5-a8e7-47e8-9485-39438363c22f", 00:17:17.878 "strip_size_kb": 0, 00:17:17.878 "state": "online", 00:17:17.878 "raid_level": "raid1", 00:17:17.878 "superblock": false, 00:17:17.878 "num_base_bdevs": 2, 00:17:17.878 "num_base_bdevs_discovered": 1, 00:17:17.878 "num_base_bdevs_operational": 1, 00:17:17.878 "base_bdevs_list": [ 00:17:17.878 { 00:17:17.878 "name": null, 00:17:17.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.878 "is_configured": false, 00:17:17.878 "data_offset": 0, 00:17:17.878 "data_size": 65536 00:17:17.878 }, 00:17:17.878 { 00:17:17.878 "name": "BaseBdev2", 00:17:17.878 "uuid": "db927ecf-867b-5658-be52-e454a00ca11a", 00:17:17.878 "is_configured": true, 00:17:17.878 "data_offset": 0, 00:17:17.878 "data_size": 65536 00:17:17.878 } 00:17:17.878 ] 00:17:17.878 }' 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.878 [2024-12-06 06:43:36.499574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.878 [2024-12-06 06:43:36.515667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:17:17.878 06:43:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.878 
06:43:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:17.878 [2024-12-06 06:43:36.518124] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.252 "name": "raid_bdev1", 00:17:19.252 "uuid": "d3e5ddd5-a8e7-47e8-9485-39438363c22f", 00:17:19.252 "strip_size_kb": 0, 00:17:19.252 "state": "online", 00:17:19.252 "raid_level": "raid1", 00:17:19.252 "superblock": false, 00:17:19.252 "num_base_bdevs": 2, 00:17:19.252 "num_base_bdevs_discovered": 2, 00:17:19.252 "num_base_bdevs_operational": 2, 00:17:19.252 "process": { 00:17:19.252 "type": "rebuild", 00:17:19.252 "target": "spare", 00:17:19.252 "progress": { 00:17:19.252 "blocks": 20480, 00:17:19.252 "percent": 31 00:17:19.252 } 00:17:19.252 }, 00:17:19.252 "base_bdevs_list": [ 
00:17:19.252 { 00:17:19.252 "name": "spare", 00:17:19.252 "uuid": "bf5685b9-c4db-5181-84e5-c2e348150f01", 00:17:19.252 "is_configured": true, 00:17:19.252 "data_offset": 0, 00:17:19.252 "data_size": 65536 00:17:19.252 }, 00:17:19.252 { 00:17:19.252 "name": "BaseBdev2", 00:17:19.252 "uuid": "db927ecf-867b-5658-be52-e454a00ca11a", 00:17:19.252 "is_configured": true, 00:17:19.252 "data_offset": 0, 00:17:19.252 "data_size": 65536 00:17:19.252 } 00:17:19.252 ] 00:17:19.252 }' 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=397 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.252 
06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.252 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.252 "name": "raid_bdev1", 00:17:19.252 "uuid": "d3e5ddd5-a8e7-47e8-9485-39438363c22f", 00:17:19.252 "strip_size_kb": 0, 00:17:19.252 "state": "online", 00:17:19.252 "raid_level": "raid1", 00:17:19.252 "superblock": false, 00:17:19.252 "num_base_bdevs": 2, 00:17:19.252 "num_base_bdevs_discovered": 2, 00:17:19.252 "num_base_bdevs_operational": 2, 00:17:19.252 "process": { 00:17:19.252 "type": "rebuild", 00:17:19.252 "target": "spare", 00:17:19.252 "progress": { 00:17:19.252 "blocks": 22528, 00:17:19.252 "percent": 34 00:17:19.252 } 00:17:19.252 }, 00:17:19.252 "base_bdevs_list": [ 00:17:19.253 { 00:17:19.253 "name": "spare", 00:17:19.253 "uuid": "bf5685b9-c4db-5181-84e5-c2e348150f01", 00:17:19.253 "is_configured": true, 00:17:19.253 "data_offset": 0, 00:17:19.253 "data_size": 65536 00:17:19.253 }, 00:17:19.253 { 00:17:19.253 "name": "BaseBdev2", 00:17:19.253 "uuid": "db927ecf-867b-5658-be52-e454a00ca11a", 00:17:19.253 "is_configured": true, 00:17:19.253 "data_offset": 0, 00:17:19.253 "data_size": 65536 00:17:19.253 } 00:17:19.253 ] 00:17:19.253 }' 00:17:19.253 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.253 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:17:19.253 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.253 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.253 06:43:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.630 06:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.630 06:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.630 06:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.630 06:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.630 06:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.630 06:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.630 06:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.630 06:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.630 06:43:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.630 06:43:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.630 06:43:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.630 06:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.630 "name": "raid_bdev1", 00:17:20.630 "uuid": "d3e5ddd5-a8e7-47e8-9485-39438363c22f", 00:17:20.630 "strip_size_kb": 0, 00:17:20.630 "state": "online", 00:17:20.630 "raid_level": "raid1", 00:17:20.630 "superblock": false, 00:17:20.630 "num_base_bdevs": 2, 00:17:20.630 "num_base_bdevs_discovered": 2, 00:17:20.630 "num_base_bdevs_operational": 2, 00:17:20.630 "process": { 
00:17:20.630 "type": "rebuild", 00:17:20.630 "target": "spare", 00:17:20.630 "progress": { 00:17:20.630 "blocks": 47104, 00:17:20.630 "percent": 71 00:17:20.630 } 00:17:20.630 }, 00:17:20.630 "base_bdevs_list": [ 00:17:20.630 { 00:17:20.630 "name": "spare", 00:17:20.630 "uuid": "bf5685b9-c4db-5181-84e5-c2e348150f01", 00:17:20.630 "is_configured": true, 00:17:20.630 "data_offset": 0, 00:17:20.630 "data_size": 65536 00:17:20.630 }, 00:17:20.630 { 00:17:20.630 "name": "BaseBdev2", 00:17:20.630 "uuid": "db927ecf-867b-5658-be52-e454a00ca11a", 00:17:20.630 "is_configured": true, 00:17:20.630 "data_offset": 0, 00:17:20.630 "data_size": 65536 00:17:20.630 } 00:17:20.630 ] 00:17:20.630 }' 00:17:20.630 06:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.630 06:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.630 06:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.630 06:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.630 06:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:21.206 [2024-12-06 06:43:39.741902] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:21.207 [2024-12-06 06:43:39.742163] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:21.207 [2024-12-06 06:43:39.742247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.465 06:43:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.465 06:43:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.465 06:43:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.465 06:43:39 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.465 06:43:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.465 06:43:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.465 06:43:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.465 06:43:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.465 06:43:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.465 06:43:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.465 06:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.465 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.465 "name": "raid_bdev1", 00:17:21.465 "uuid": "d3e5ddd5-a8e7-47e8-9485-39438363c22f", 00:17:21.465 "strip_size_kb": 0, 00:17:21.465 "state": "online", 00:17:21.465 "raid_level": "raid1", 00:17:21.465 "superblock": false, 00:17:21.465 "num_base_bdevs": 2, 00:17:21.465 "num_base_bdevs_discovered": 2, 00:17:21.465 "num_base_bdevs_operational": 2, 00:17:21.465 "base_bdevs_list": [ 00:17:21.465 { 00:17:21.465 "name": "spare", 00:17:21.465 "uuid": "bf5685b9-c4db-5181-84e5-c2e348150f01", 00:17:21.465 "is_configured": true, 00:17:21.465 "data_offset": 0, 00:17:21.465 "data_size": 65536 00:17:21.465 }, 00:17:21.465 { 00:17:21.465 "name": "BaseBdev2", 00:17:21.465 "uuid": "db927ecf-867b-5658-be52-e454a00ca11a", 00:17:21.465 "is_configured": true, 00:17:21.465 "data_offset": 0, 00:17:21.465 "data_size": 65536 00:17:21.465 } 00:17:21.465 ] 00:17:21.465 }' 00:17:21.465 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.465 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:21.465 06:43:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.723 "name": "raid_bdev1", 00:17:21.723 "uuid": "d3e5ddd5-a8e7-47e8-9485-39438363c22f", 00:17:21.723 "strip_size_kb": 0, 00:17:21.723 "state": "online", 00:17:21.723 "raid_level": "raid1", 00:17:21.723 "superblock": false, 00:17:21.723 "num_base_bdevs": 2, 00:17:21.723 "num_base_bdevs_discovered": 2, 00:17:21.723 "num_base_bdevs_operational": 2, 00:17:21.723 "base_bdevs_list": [ 00:17:21.723 { 00:17:21.723 "name": "spare", 00:17:21.723 "uuid": "bf5685b9-c4db-5181-84e5-c2e348150f01", 00:17:21.723 "is_configured": true, 
00:17:21.723 "data_offset": 0, 00:17:21.723 "data_size": 65536 00:17:21.723 }, 00:17:21.723 { 00:17:21.723 "name": "BaseBdev2", 00:17:21.723 "uuid": "db927ecf-867b-5658-be52-e454a00ca11a", 00:17:21.723 "is_configured": true, 00:17:21.723 "data_offset": 0, 00:17:21.723 "data_size": 65536 00:17:21.723 } 00:17:21.723 ] 00:17:21.723 }' 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.723 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.724 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.724 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:21.724 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.724 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.724 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.724 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.724 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.724 06:43:40 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.724 06:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.724 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.724 06:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.981 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.981 "name": "raid_bdev1", 00:17:21.981 "uuid": "d3e5ddd5-a8e7-47e8-9485-39438363c22f", 00:17:21.981 "strip_size_kb": 0, 00:17:21.981 "state": "online", 00:17:21.981 "raid_level": "raid1", 00:17:21.981 "superblock": false, 00:17:21.981 "num_base_bdevs": 2, 00:17:21.981 "num_base_bdevs_discovered": 2, 00:17:21.981 "num_base_bdevs_operational": 2, 00:17:21.981 "base_bdevs_list": [ 00:17:21.981 { 00:17:21.981 "name": "spare", 00:17:21.981 "uuid": "bf5685b9-c4db-5181-84e5-c2e348150f01", 00:17:21.981 "is_configured": true, 00:17:21.981 "data_offset": 0, 00:17:21.981 "data_size": 65536 00:17:21.981 }, 00:17:21.981 { 00:17:21.981 "name": "BaseBdev2", 00:17:21.981 "uuid": "db927ecf-867b-5658-be52-e454a00ca11a", 00:17:21.981 "is_configured": true, 00:17:21.981 "data_offset": 0, 00:17:21.981 "data_size": 65536 00:17:21.981 } 00:17:21.981 ] 00:17:21.981 }' 00:17:21.981 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.982 06:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.240 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:22.240 06:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.240 06:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.240 [2024-12-06 06:43:40.830363] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:22.240 [2024-12-06 06:43:40.831459] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:22.240 [2024-12-06 06:43:40.831623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.240 [2024-12-06 06:43:40.831738] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.240 [2024-12-06 06:43:40.831758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:22.240 06:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.240 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.240 06:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.240 06:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.240 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:22.240 06:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.498 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:22.498 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:22.498 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:22.498 06:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:22.498 06:43:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:22.498 06:43:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:22.498 06:43:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:22.498 06:43:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:17:22.498 06:43:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:22.498 06:43:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:22.498 06:43:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:22.498 06:43:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:22.498 06:43:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:22.757 /dev/nbd0 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:22.757 1+0 records in 00:17:22.757 1+0 records out 00:17:22.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648405 s, 6.3 MB/s 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:22.757 06:43:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:23.015 /dev/nbd1 00:17:23.015 06:43:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:23.015 06:43:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:23.015 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:23.015 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:23.015 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.015 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.015 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:23.015 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:23.015 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.015 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.015 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.015 1+0 records in 00:17:23.015 1+0 records out 00:17:23.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407861 s, 10.0 MB/s 00:17:23.015 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.015 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:23.015 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.015 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.015 06:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:23.015 06:43:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.016 06:43:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:23.016 06:43:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:23.274 06:43:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:23.274 06:43:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.274 06:43:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:23.274 06:43:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:23.274 06:43:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:23.274 06:43:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.274 06:43:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:23.533 06:43:42 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:23.533 06:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:23.533 06:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:23.533 06:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.533 06:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.533 06:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:23.533 06:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:23.533 06:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.533 06:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.533 06:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75656 00:17:24.102 06:43:42 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75656 ']' 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75656 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75656 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:24.102 killing process with pid 75656 00:17:24.102 Received shutdown signal, test time was about 60.000000 seconds 00:17:24.102 00:17:24.102 Latency(us) 00:17:24.102 [2024-12-06T06:43:42.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.102 [2024-12-06T06:43:42.749Z] =================================================================================================================== 00:17:24.102 [2024-12-06T06:43:42.749Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75656' 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75656 00:17:24.102 [2024-12-06 06:43:42.519553] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:24.102 06:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75656 00:17:24.361 [2024-12-06 06:43:42.790408] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:25.298 00:17:25.298 real 0m18.960s 00:17:25.298 user 0m21.489s 00:17:25.298 sys 0m3.746s 00:17:25.298 06:43:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.298 ************************************ 00:17:25.298 END TEST raid_rebuild_test 00:17:25.298 ************************************ 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.298 06:43:43 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:17:25.298 06:43:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:25.298 06:43:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.298 06:43:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:25.298 ************************************ 00:17:25.298 START TEST raid_rebuild_test_sb 00:17:25.298 ************************************ 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76104 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76104 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76104 ']' 00:17:25.298 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.298 06:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.556 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:25.556 Zero copy mechanism will not be used. 00:17:25.556 [2024-12-06 06:43:44.026260] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:17:25.556 [2024-12-06 06:43:44.026438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76104 ] 00:17:25.815 [2024-12-06 06:43:44.215856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.815 [2024-12-06 06:43:44.390147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.074 [2024-12-06 06:43:44.595343] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:26.074 [2024-12-06 06:43:44.595416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for 
bdev in "${base_bdevs[@]}" 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.642 BaseBdev1_malloc 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.642 [2024-12-06 06:43:45.135817] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:26.642 [2024-12-06 06:43:45.136032] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.642 [2024-12-06 06:43:45.136115] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:26.642 [2024-12-06 06:43:45.136296] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.642 [2024-12-06 06:43:45.139124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.642 BaseBdev1 00:17:26.642 [2024-12-06 06:43:45.139304] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:26.642 06:43:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.642 BaseBdev2_malloc 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.642 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.642 [2024-12-06 06:43:45.188994] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:26.642 [2024-12-06 06:43:45.189224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.642 [2024-12-06 06:43:45.189302] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:26.643 [2024-12-06 06:43:45.189428] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.643 [2024-12-06 06:43:45.192282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.643 [2024-12-06 06:43:45.192478] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:26.643 BaseBdev2 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.643 spare_malloc 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.643 spare_delay 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.643 [2024-12-06 06:43:45.257461] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:26.643 [2024-12-06 06:43:45.257687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.643 [2024-12-06 06:43:45.257726] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:26.643 [2024-12-06 06:43:45.257746] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.643 [2024-12-06 06:43:45.260577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.643 [2024-12-06 06:43:45.260628] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:26.643 spare 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.643 06:43:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.643 [2024-12-06 06:43:45.265584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.643 [2024-12-06 06:43:45.268174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:26.643 [2024-12-06 06:43:45.268578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:26.643 [2024-12-06 06:43:45.268718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:26.643 [2024-12-06 06:43:45.269073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:26.643 [2024-12-06 06:43:45.269429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:26.643 [2024-12-06 06:43:45.269568] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:26.643 [2024-12-06 06:43:45.269949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.643 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.901 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.901 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.901 "name": "raid_bdev1", 00:17:26.901 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:26.901 "strip_size_kb": 0, 00:17:26.901 "state": "online", 00:17:26.901 "raid_level": "raid1", 00:17:26.901 "superblock": true, 00:17:26.901 "num_base_bdevs": 2, 00:17:26.901 "num_base_bdevs_discovered": 2, 00:17:26.901 "num_base_bdevs_operational": 2, 00:17:26.901 "base_bdevs_list": [ 00:17:26.901 { 00:17:26.901 "name": "BaseBdev1", 00:17:26.901 "uuid": "c84f2726-2087-52b1-a3d4-8964281717b6", 00:17:26.901 "is_configured": true, 00:17:26.901 "data_offset": 2048, 00:17:26.901 "data_size": 63488 00:17:26.901 }, 00:17:26.901 { 00:17:26.901 "name": "BaseBdev2", 00:17:26.901 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:26.901 "is_configured": true, 00:17:26.901 "data_offset": 2048, 00:17:26.901 "data_size": 63488 00:17:26.901 } 00:17:26.901 ] 00:17:26.901 }' 00:17:26.901 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.901 06:43:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:27.160 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:27.160 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.160 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:27.160 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.160 [2024-12-06 06:43:45.770447] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:27.160 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:27.419 06:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:27.676 [2024-12-06 06:43:46.178277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:27.676 /dev/nbd0 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:27.676 1+0 records in 00:17:27.676 1+0 records out 00:17:27.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257724 s, 15.9 MB/s 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:27.676 06:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:17:34.236 63488+0 records in 00:17:34.236 63488+0 records out 00:17:34.236 32505856 bytes (33 MB, 31 MiB) copied, 6.10913 s, 5.3 MB/s 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:34.236 06:43:52 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:34.236 [2024-12-06 06:43:52.641939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.236 [2024-12-06 06:43:52.674012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.236 "name": "raid_bdev1", 00:17:34.236 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:34.236 "strip_size_kb": 0, 00:17:34.236 "state": "online", 00:17:34.236 "raid_level": "raid1", 00:17:34.236 "superblock": true, 
00:17:34.236 "num_base_bdevs": 2, 00:17:34.236 "num_base_bdevs_discovered": 1, 00:17:34.236 "num_base_bdevs_operational": 1, 00:17:34.236 "base_bdevs_list": [ 00:17:34.236 { 00:17:34.236 "name": null, 00:17:34.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.236 "is_configured": false, 00:17:34.236 "data_offset": 0, 00:17:34.236 "data_size": 63488 00:17:34.236 }, 00:17:34.236 { 00:17:34.236 "name": "BaseBdev2", 00:17:34.236 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:34.236 "is_configured": true, 00:17:34.236 "data_offset": 2048, 00:17:34.236 "data_size": 63488 00:17:34.236 } 00:17:34.236 ] 00:17:34.236 }' 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.236 06:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.805 06:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:34.805 06:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.805 06:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.805 [2024-12-06 06:43:53.202236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.805 [2024-12-06 06:43:53.218986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:17:34.805 06:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.805 06:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:34.805 [2024-12-06 06:43:53.221595] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.741 "name": "raid_bdev1", 00:17:35.741 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:35.741 "strip_size_kb": 0, 00:17:35.741 "state": "online", 00:17:35.741 "raid_level": "raid1", 00:17:35.741 "superblock": true, 00:17:35.741 "num_base_bdevs": 2, 00:17:35.741 "num_base_bdevs_discovered": 2, 00:17:35.741 "num_base_bdevs_operational": 2, 00:17:35.741 "process": { 00:17:35.741 "type": "rebuild", 00:17:35.741 "target": "spare", 00:17:35.741 "progress": { 00:17:35.741 "blocks": 20480, 00:17:35.741 "percent": 32 00:17:35.741 } 00:17:35.741 }, 00:17:35.741 "base_bdevs_list": [ 00:17:35.741 { 00:17:35.741 "name": "spare", 00:17:35.741 "uuid": "99ddc879-a88c-58d8-a9b1-939ff4dc03e5", 00:17:35.741 "is_configured": true, 00:17:35.741 "data_offset": 2048, 00:17:35.741 "data_size": 63488 00:17:35.741 }, 00:17:35.741 { 00:17:35.741 "name": "BaseBdev2", 00:17:35.741 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:35.741 "is_configured": true, 00:17:35.741 "data_offset": 2048, 00:17:35.741 "data_size": 63488 
00:17:35.741 } 00:17:35.741 ] 00:17:35.741 }' 00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.741 06:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.000 [2024-12-06 06:43:54.390924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.000 [2024-12-06 06:43:54.430968] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:36.000 [2024-12-06 06:43:54.431266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.000 [2024-12-06 06:43:54.431296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.000 [2024-12-06 06:43:54.431313] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.000 "name": "raid_bdev1", 00:17:36.000 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:36.000 "strip_size_kb": 0, 00:17:36.000 "state": "online", 00:17:36.000 "raid_level": "raid1", 00:17:36.000 "superblock": true, 00:17:36.000 "num_base_bdevs": 2, 00:17:36.000 "num_base_bdevs_discovered": 1, 00:17:36.000 "num_base_bdevs_operational": 1, 00:17:36.000 "base_bdevs_list": [ 00:17:36.000 { 00:17:36.000 "name": null, 00:17:36.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.000 "is_configured": false, 00:17:36.000 "data_offset": 0, 00:17:36.000 "data_size": 63488 00:17:36.000 }, 00:17:36.000 { 00:17:36.000 "name": "BaseBdev2", 00:17:36.000 "uuid": 
"a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:36.000 "is_configured": true, 00:17:36.000 "data_offset": 2048, 00:17:36.000 "data_size": 63488 00:17:36.000 } 00:17:36.000 ] 00:17:36.000 }' 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.000 06:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.567 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:36.567 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.567 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:36.567 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:36.567 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.567 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.567 06:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.567 06:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.567 06:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.567 06:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.567 06:43:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.567 "name": "raid_bdev1", 00:17:36.567 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:36.567 "strip_size_kb": 0, 00:17:36.567 "state": "online", 00:17:36.567 "raid_level": "raid1", 00:17:36.567 "superblock": true, 00:17:36.567 "num_base_bdevs": 2, 00:17:36.567 "num_base_bdevs_discovered": 1, 00:17:36.567 "num_base_bdevs_operational": 1, 00:17:36.567 "base_bdevs_list": [ 00:17:36.567 { 
00:17:36.567 "name": null, 00:17:36.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.567 "is_configured": false, 00:17:36.567 "data_offset": 0, 00:17:36.567 "data_size": 63488 00:17:36.567 }, 00:17:36.567 { 00:17:36.567 "name": "BaseBdev2", 00:17:36.567 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:36.567 "is_configured": true, 00:17:36.567 "data_offset": 2048, 00:17:36.567 "data_size": 63488 00:17:36.567 } 00:17:36.567 ] 00:17:36.567 }' 00:17:36.567 06:43:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.567 06:43:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:36.567 06:43:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.567 06:43:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:36.567 06:43:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:36.567 06:43:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.567 06:43:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.567 [2024-12-06 06:43:55.116359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:36.567 [2024-12-06 06:43:55.132114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:17:36.567 06:43:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.567 06:43:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:36.567 [2024-12-06 06:43:55.134825] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.501 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.501 06:43:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.501 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.501 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.501 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.501 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.501 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.501 06:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.501 06:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.761 "name": "raid_bdev1", 00:17:37.761 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:37.761 "strip_size_kb": 0, 00:17:37.761 "state": "online", 00:17:37.761 "raid_level": "raid1", 00:17:37.761 "superblock": true, 00:17:37.761 "num_base_bdevs": 2, 00:17:37.761 "num_base_bdevs_discovered": 2, 00:17:37.761 "num_base_bdevs_operational": 2, 00:17:37.761 "process": { 00:17:37.761 "type": "rebuild", 00:17:37.761 "target": "spare", 00:17:37.761 "progress": { 00:17:37.761 "blocks": 20480, 00:17:37.761 "percent": 32 00:17:37.761 } 00:17:37.761 }, 00:17:37.761 "base_bdevs_list": [ 00:17:37.761 { 00:17:37.761 "name": "spare", 00:17:37.761 "uuid": "99ddc879-a88c-58d8-a9b1-939ff4dc03e5", 00:17:37.761 "is_configured": true, 00:17:37.761 "data_offset": 2048, 00:17:37.761 "data_size": 63488 00:17:37.761 }, 00:17:37.761 { 00:17:37.761 "name": "BaseBdev2", 00:17:37.761 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:37.761 
"is_configured": true, 00:17:37.761 "data_offset": 2048, 00:17:37.761 "data_size": 63488 00:17:37.761 } 00:17:37.761 ] 00:17:37.761 }' 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:37.761 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=416 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.761 "name": "raid_bdev1", 00:17:37.761 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:37.761 "strip_size_kb": 0, 00:17:37.761 "state": "online", 00:17:37.761 "raid_level": "raid1", 00:17:37.761 "superblock": true, 00:17:37.761 "num_base_bdevs": 2, 00:17:37.761 "num_base_bdevs_discovered": 2, 00:17:37.761 "num_base_bdevs_operational": 2, 00:17:37.761 "process": { 00:17:37.761 "type": "rebuild", 00:17:37.761 "target": "spare", 00:17:37.761 "progress": { 00:17:37.761 "blocks": 22528, 00:17:37.761 "percent": 35 00:17:37.761 } 00:17:37.761 }, 00:17:37.761 "base_bdevs_list": [ 00:17:37.761 { 00:17:37.761 "name": "spare", 00:17:37.761 "uuid": "99ddc879-a88c-58d8-a9b1-939ff4dc03e5", 00:17:37.761 "is_configured": true, 00:17:37.761 "data_offset": 2048, 00:17:37.761 "data_size": 63488 00:17:37.761 }, 00:17:37.761 { 00:17:37.761 "name": "BaseBdev2", 00:17:37.761 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:37.761 "is_configured": true, 00:17:37.761 "data_offset": 2048, 00:17:37.761 "data_size": 63488 00:17:37.761 } 00:17:37.761 ] 00:17:37.761 }' 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.761 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.761 06:43:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.020 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.020 06:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:38.956 06:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:38.956 06:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.956 06:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.956 06:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.956 06:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.956 06:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.956 06:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.956 06:43:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.956 06:43:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.956 06:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.956 06:43:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.956 06:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.956 "name": "raid_bdev1", 00:17:38.956 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:38.956 "strip_size_kb": 0, 00:17:38.956 "state": "online", 00:17:38.956 "raid_level": "raid1", 00:17:38.956 "superblock": true, 00:17:38.956 "num_base_bdevs": 2, 00:17:38.956 "num_base_bdevs_discovered": 2, 00:17:38.956 "num_base_bdevs_operational": 2, 00:17:38.956 "process": { 
00:17:38.956 "type": "rebuild", 00:17:38.956 "target": "spare", 00:17:38.956 "progress": { 00:17:38.956 "blocks": 47104, 00:17:38.956 "percent": 74 00:17:38.956 } 00:17:38.956 }, 00:17:38.956 "base_bdevs_list": [ 00:17:38.956 { 00:17:38.956 "name": "spare", 00:17:38.956 "uuid": "99ddc879-a88c-58d8-a9b1-939ff4dc03e5", 00:17:38.956 "is_configured": true, 00:17:38.956 "data_offset": 2048, 00:17:38.956 "data_size": 63488 00:17:38.956 }, 00:17:38.956 { 00:17:38.956 "name": "BaseBdev2", 00:17:38.956 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:38.956 "is_configured": true, 00:17:38.956 "data_offset": 2048, 00:17:38.956 "data_size": 63488 00:17:38.956 } 00:17:38.956 ] 00:17:38.956 }' 00:17:38.956 06:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.956 06:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.956 06:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.214 06:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.214 06:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:39.781 [2024-12-06 06:43:58.257431] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:39.781 [2024-12-06 06:43:58.257865] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:39.781 [2024-12-06 06:43:58.258059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.039 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.039 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.039 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.039 
06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.039 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.039 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.039 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.039 06:43:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.039 06:43:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.039 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.039 06:43:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.039 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.039 "name": "raid_bdev1", 00:17:40.039 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:40.039 "strip_size_kb": 0, 00:17:40.039 "state": "online", 00:17:40.039 "raid_level": "raid1", 00:17:40.039 "superblock": true, 00:17:40.039 "num_base_bdevs": 2, 00:17:40.039 "num_base_bdevs_discovered": 2, 00:17:40.039 "num_base_bdevs_operational": 2, 00:17:40.039 "base_bdevs_list": [ 00:17:40.039 { 00:17:40.039 "name": "spare", 00:17:40.039 "uuid": "99ddc879-a88c-58d8-a9b1-939ff4dc03e5", 00:17:40.039 "is_configured": true, 00:17:40.039 "data_offset": 2048, 00:17:40.039 "data_size": 63488 00:17:40.039 }, 00:17:40.039 { 00:17:40.039 "name": "BaseBdev2", 00:17:40.039 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:40.039 "is_configured": true, 00:17:40.039 "data_offset": 2048, 00:17:40.039 "data_size": 63488 00:17:40.039 } 00:17:40.039 ] 00:17:40.039 }' 00:17:40.039 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.298 06:43:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:40.298 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.298 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:40.298 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:40.298 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:40.298 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.298 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:40.298 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:40.298 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.298 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.298 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.298 06:43:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.298 06:43:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.298 06:43:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.298 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.298 "name": "raid_bdev1", 00:17:40.298 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:40.298 "strip_size_kb": 0, 00:17:40.298 "state": "online", 00:17:40.298 "raid_level": "raid1", 00:17:40.298 "superblock": true, 00:17:40.298 "num_base_bdevs": 2, 00:17:40.298 "num_base_bdevs_discovered": 2, 00:17:40.298 "num_base_bdevs_operational": 2, 00:17:40.298 "base_bdevs_list": [ 00:17:40.298 { 00:17:40.298 
"name": "spare", 00:17:40.298 "uuid": "99ddc879-a88c-58d8-a9b1-939ff4dc03e5", 00:17:40.298 "is_configured": true, 00:17:40.298 "data_offset": 2048, 00:17:40.298 "data_size": 63488 00:17:40.298 }, 00:17:40.298 { 00:17:40.298 "name": "BaseBdev2", 00:17:40.298 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:40.298 "is_configured": true, 00:17:40.298 "data_offset": 2048, 00:17:40.298 "data_size": 63488 00:17:40.298 } 00:17:40.298 ] 00:17:40.298 }' 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.299 06:43:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.557 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.557 "name": "raid_bdev1", 00:17:40.557 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:40.557 "strip_size_kb": 0, 00:17:40.557 "state": "online", 00:17:40.557 "raid_level": "raid1", 00:17:40.557 "superblock": true, 00:17:40.557 "num_base_bdevs": 2, 00:17:40.557 "num_base_bdevs_discovered": 2, 00:17:40.557 "num_base_bdevs_operational": 2, 00:17:40.557 "base_bdevs_list": [ 00:17:40.557 { 00:17:40.557 "name": "spare", 00:17:40.557 "uuid": "99ddc879-a88c-58d8-a9b1-939ff4dc03e5", 00:17:40.557 "is_configured": true, 00:17:40.557 "data_offset": 2048, 00:17:40.557 "data_size": 63488 00:17:40.558 }, 00:17:40.558 { 00:17:40.558 "name": "BaseBdev2", 00:17:40.558 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:40.558 "is_configured": true, 00:17:40.558 "data_offset": 2048, 00:17:40.558 "data_size": 63488 00:17:40.558 } 00:17:40.558 ] 00:17:40.558 }' 00:17:40.558 06:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.558 06:43:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.817 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:40.817 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.817 06:43:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.818 [2024-12-06 06:43:59.426475] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:40.818 [2024-12-06 06:43:59.426515] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:40.818 [2024-12-06 06:43:59.426629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.818 [2024-12-06 06:43:59.426724] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.818 [2024-12-06 06:43:59.426745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:40.818 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.818 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.818 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.818 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:40.818 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.818 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.077 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:41.077 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:41.077 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:41.077 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:41.077 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:41.077 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:17:41.077 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:41.077 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:41.077 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:41.077 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:41.077 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:41.077 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:41.077 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:41.336 /dev/nbd0 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:41.336 1+0 records in 00:17:41.336 1+0 records out 00:17:41.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621881 s, 6.6 MB/s 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:41.336 06:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:41.594 /dev/nbd1 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:41.594 06:44:00 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:41.594 1+0 records in 00:17:41.594 1+0 records out 00:17:41.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534284 s, 7.7 MB/s 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:41.594 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:41.852 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:41.852 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:41.852 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:41.852 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:41.852 
06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:41.852 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:41.852 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:42.110 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:42.110 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:42.111 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:42.111 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:42.111 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:42.111 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:42.369 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:42.369 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:42.369 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:42.369 06:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.628 [2024-12-06 06:44:01.087945] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:42.628 [2024-12-06 06:44:01.088145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.628 [2024-12-06 06:44:01.088326] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:42.628 [2024-12-06 06:44:01.088354] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.628 [2024-12-06 06:44:01.091272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.628 [2024-12-06 06:44:01.091319] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:42.628 [2024-12-06 06:44:01.091437] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:42.628 [2024-12-06 
06:44:01.091500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.628 spare 00:17:42.628 [2024-12-06 06:44:01.091703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.628 [2024-12-06 06:44:01.191844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:42.628 [2024-12-06 06:44:01.191898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:42.628 [2024-12-06 06:44:01.192313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:17:42.628 [2024-12-06 06:44:01.192639] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:42.628 [2024-12-06 06:44:01.192665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:42.628 [2024-12-06 06:44:01.192915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.628 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.628 "name": "raid_bdev1", 00:17:42.628 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:42.628 "strip_size_kb": 0, 00:17:42.628 "state": "online", 00:17:42.628 "raid_level": "raid1", 00:17:42.628 "superblock": true, 00:17:42.628 "num_base_bdevs": 2, 00:17:42.628 "num_base_bdevs_discovered": 2, 00:17:42.628 "num_base_bdevs_operational": 2, 00:17:42.628 "base_bdevs_list": [ 00:17:42.628 { 00:17:42.628 "name": "spare", 00:17:42.628 "uuid": "99ddc879-a88c-58d8-a9b1-939ff4dc03e5", 00:17:42.628 "is_configured": true, 00:17:42.629 "data_offset": 2048, 00:17:42.629 "data_size": 63488 00:17:42.629 }, 00:17:42.629 { 00:17:42.629 "name": "BaseBdev2", 00:17:42.629 "uuid": 
"a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:42.629 "is_configured": true, 00:17:42.629 "data_offset": 2048, 00:17:42.629 "data_size": 63488 00:17:42.629 } 00:17:42.629 ] 00:17:42.629 }' 00:17:42.629 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.629 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.197 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:43.197 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.197 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:43.197 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:43.197 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.197 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.198 "name": "raid_bdev1", 00:17:43.198 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:43.198 "strip_size_kb": 0, 00:17:43.198 "state": "online", 00:17:43.198 "raid_level": "raid1", 00:17:43.198 "superblock": true, 00:17:43.198 "num_base_bdevs": 2, 00:17:43.198 "num_base_bdevs_discovered": 2, 00:17:43.198 "num_base_bdevs_operational": 2, 00:17:43.198 "base_bdevs_list": [ 00:17:43.198 { 
00:17:43.198 "name": "spare", 00:17:43.198 "uuid": "99ddc879-a88c-58d8-a9b1-939ff4dc03e5", 00:17:43.198 "is_configured": true, 00:17:43.198 "data_offset": 2048, 00:17:43.198 "data_size": 63488 00:17:43.198 }, 00:17:43.198 { 00:17:43.198 "name": "BaseBdev2", 00:17:43.198 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:43.198 "is_configured": true, 00:17:43.198 "data_offset": 2048, 00:17:43.198 "data_size": 63488 00:17:43.198 } 00:17:43.198 ] 00:17:43.198 }' 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.198 [2024-12-06 06:44:01.829076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.198 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.455 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.455 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.455 "name": "raid_bdev1", 00:17:43.455 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:43.455 "strip_size_kb": 0, 00:17:43.455 
"state": "online", 00:17:43.455 "raid_level": "raid1", 00:17:43.455 "superblock": true, 00:17:43.455 "num_base_bdevs": 2, 00:17:43.455 "num_base_bdevs_discovered": 1, 00:17:43.455 "num_base_bdevs_operational": 1, 00:17:43.455 "base_bdevs_list": [ 00:17:43.455 { 00:17:43.455 "name": null, 00:17:43.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.455 "is_configured": false, 00:17:43.455 "data_offset": 0, 00:17:43.455 "data_size": 63488 00:17:43.455 }, 00:17:43.455 { 00:17:43.455 "name": "BaseBdev2", 00:17:43.455 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:43.455 "is_configured": true, 00:17:43.455 "data_offset": 2048, 00:17:43.455 "data_size": 63488 00:17:43.455 } 00:17:43.455 ] 00:17:43.455 }' 00:17:43.455 06:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.455 06:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.026 06:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:44.026 06:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.026 06:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.026 [2024-12-06 06:44:02.397206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.026 [2024-12-06 06:44:02.397456] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:44.026 [2024-12-06 06:44:02.397482] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:44.026 [2024-12-06 06:44:02.397554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.026 [2024-12-06 06:44:02.413094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:17:44.026 06:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.026 06:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:44.026 [2024-12-06 06:44:02.415673] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.963 "name": "raid_bdev1", 00:17:44.963 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:44.963 "strip_size_kb": 0, 00:17:44.963 "state": "online", 00:17:44.963 "raid_level": "raid1", 
00:17:44.963 "superblock": true, 00:17:44.963 "num_base_bdevs": 2, 00:17:44.963 "num_base_bdevs_discovered": 2, 00:17:44.963 "num_base_bdevs_operational": 2, 00:17:44.963 "process": { 00:17:44.963 "type": "rebuild", 00:17:44.963 "target": "spare", 00:17:44.963 "progress": { 00:17:44.963 "blocks": 20480, 00:17:44.963 "percent": 32 00:17:44.963 } 00:17:44.963 }, 00:17:44.963 "base_bdevs_list": [ 00:17:44.963 { 00:17:44.963 "name": "spare", 00:17:44.963 "uuid": "99ddc879-a88c-58d8-a9b1-939ff4dc03e5", 00:17:44.963 "is_configured": true, 00:17:44.963 "data_offset": 2048, 00:17:44.963 "data_size": 63488 00:17:44.963 }, 00:17:44.963 { 00:17:44.963 "name": "BaseBdev2", 00:17:44.963 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:44.963 "is_configured": true, 00:17:44.963 "data_offset": 2048, 00:17:44.963 "data_size": 63488 00:17:44.963 } 00:17:44.963 ] 00:17:44.963 }' 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.963 06:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.963 [2024-12-06 06:44:03.577236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.220 [2024-12-06 06:44:03.625178] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:45.220 [2024-12-06 06:44:03.625428] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:17:45.220 [2024-12-06 06:44:03.625457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.220 [2024-12-06 06:44:03.625474] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:45.220 06:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.220 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.220 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.220 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.220 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.220 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.220 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.220 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.220 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.220 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.220 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.220 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.220 06:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.220 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.220 06:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.220 06:44:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.221 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.221 "name": "raid_bdev1", 00:17:45.221 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:45.221 "strip_size_kb": 0, 00:17:45.221 "state": "online", 00:17:45.221 "raid_level": "raid1", 00:17:45.221 "superblock": true, 00:17:45.221 "num_base_bdevs": 2, 00:17:45.221 "num_base_bdevs_discovered": 1, 00:17:45.221 "num_base_bdevs_operational": 1, 00:17:45.221 "base_bdevs_list": [ 00:17:45.221 { 00:17:45.221 "name": null, 00:17:45.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.221 "is_configured": false, 00:17:45.221 "data_offset": 0, 00:17:45.221 "data_size": 63488 00:17:45.221 }, 00:17:45.221 { 00:17:45.221 "name": "BaseBdev2", 00:17:45.221 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:45.221 "is_configured": true, 00:17:45.221 "data_offset": 2048, 00:17:45.221 "data_size": 63488 00:17:45.221 } 00:17:45.221 ] 00:17:45.221 }' 00:17:45.221 06:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.221 06:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.785 06:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:45.785 06:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.785 06:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.785 [2024-12-06 06:44:04.153268] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:45.785 [2024-12-06 06:44:04.153486] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.785 [2024-12-06 06:44:04.153690] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:45.785 [2024-12-06 06:44:04.153864] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.785 [2024-12-06 06:44:04.154667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.785 [2024-12-06 06:44:04.154839] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:45.785 [2024-12-06 06:44:04.155093] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:45.785 [2024-12-06 06:44:04.155127] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:45.785 [2024-12-06 06:44:04.155143] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:45.785 [2024-12-06 06:44:04.155192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.785 [2024-12-06 06:44:04.171251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:17:45.785 spare 00:17:45.785 06:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.785 06:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:45.785 [2024-12-06 06:44:04.174103] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:46.720 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.720 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.720 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.720 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.720 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.720 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:46.720 06:44:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.720 06:44:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.720 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.720 06:44:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.720 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.720 "name": "raid_bdev1", 00:17:46.720 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:46.720 "strip_size_kb": 0, 00:17:46.720 "state": "online", 00:17:46.720 "raid_level": "raid1", 00:17:46.720 "superblock": true, 00:17:46.720 "num_base_bdevs": 2, 00:17:46.720 "num_base_bdevs_discovered": 2, 00:17:46.720 "num_base_bdevs_operational": 2, 00:17:46.720 "process": { 00:17:46.720 "type": "rebuild", 00:17:46.720 "target": "spare", 00:17:46.720 "progress": { 00:17:46.720 "blocks": 20480, 00:17:46.720 "percent": 32 00:17:46.720 } 00:17:46.720 }, 00:17:46.720 "base_bdevs_list": [ 00:17:46.720 { 00:17:46.720 "name": "spare", 00:17:46.720 "uuid": "99ddc879-a88c-58d8-a9b1-939ff4dc03e5", 00:17:46.720 "is_configured": true, 00:17:46.720 "data_offset": 2048, 00:17:46.720 "data_size": 63488 00:17:46.720 }, 00:17:46.720 { 00:17:46.720 "name": "BaseBdev2", 00:17:46.720 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:46.720 "is_configured": true, 00:17:46.720 "data_offset": 2048, 00:17:46.720 "data_size": 63488 00:17:46.720 } 00:17:46.720 ] 00:17:46.720 }' 00:17:46.720 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.720 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.720 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.978 
06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.978 [2024-12-06 06:44:05.375413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.978 [2024-12-06 06:44:05.383294] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:46.978 [2024-12-06 06:44:05.383501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.978 [2024-12-06 06:44:05.383559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.978 [2024-12-06 06:44:05.383574] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.978 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.978 "name": "raid_bdev1", 00:17:46.978 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:46.978 "strip_size_kb": 0, 00:17:46.978 "state": "online", 00:17:46.978 "raid_level": "raid1", 00:17:46.978 "superblock": true, 00:17:46.978 "num_base_bdevs": 2, 00:17:46.978 "num_base_bdevs_discovered": 1, 00:17:46.978 "num_base_bdevs_operational": 1, 00:17:46.978 "base_bdevs_list": [ 00:17:46.978 { 00:17:46.978 "name": null, 00:17:46.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.978 "is_configured": false, 00:17:46.978 "data_offset": 0, 00:17:46.978 "data_size": 63488 00:17:46.978 }, 00:17:46.978 { 00:17:46.978 "name": "BaseBdev2", 00:17:46.978 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:46.978 "is_configured": true, 00:17:46.978 "data_offset": 2048, 00:17:46.978 "data_size": 63488 00:17:46.978 } 00:17:46.978 ] 00:17:46.979 }' 00:17:46.979 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.979 06:44:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.544 06:44:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:47.544 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.544 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:47.545 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:47.545 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.545 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.545 06:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.545 06:44:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.545 06:44:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.545 06:44:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.545 06:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.545 "name": "raid_bdev1", 00:17:47.545 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:47.545 "strip_size_kb": 0, 00:17:47.545 "state": "online", 00:17:47.545 "raid_level": "raid1", 00:17:47.545 "superblock": true, 00:17:47.545 "num_base_bdevs": 2, 00:17:47.545 "num_base_bdevs_discovered": 1, 00:17:47.545 "num_base_bdevs_operational": 1, 00:17:47.545 "base_bdevs_list": [ 00:17:47.545 { 00:17:47.545 "name": null, 00:17:47.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.545 "is_configured": false, 00:17:47.545 "data_offset": 0, 00:17:47.545 "data_size": 63488 00:17:47.545 }, 00:17:47.545 { 00:17:47.545 "name": "BaseBdev2", 00:17:47.545 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:47.545 "is_configured": true, 00:17:47.545 "data_offset": 2048, 00:17:47.545 "data_size": 
63488 00:17:47.545 } 00:17:47.545 ] 00:17:47.545 }' 00:17:47.545 06:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.545 06:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:47.545 06:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.545 06:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:47.545 06:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:47.545 06:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.545 06:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.545 06:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.545 06:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:47.545 06:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.545 06:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.545 [2024-12-06 06:44:06.119556] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:47.545 [2024-12-06 06:44:06.119783] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.545 [2024-12-06 06:44:06.119990] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:47.545 [2024-12-06 06:44:06.120028] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.545 [2024-12-06 06:44:06.120673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.545 [2024-12-06 06:44:06.120708] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:17:47.545 [2024-12-06 06:44:06.120817] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:47.545 [2024-12-06 06:44:06.120839] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:47.545 [2024-12-06 06:44:06.120855] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:47.545 [2024-12-06 06:44:06.120869] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:47.545 BaseBdev1 00:17:47.545 06:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.545 06:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:48.916 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:48.916 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.916 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.916 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.916 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.916 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:48.916 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.916 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.916 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.916 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.916 06:44:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.916 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.916 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.916 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.916 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.916 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.916 "name": "raid_bdev1", 00:17:48.916 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:48.916 "strip_size_kb": 0, 00:17:48.916 "state": "online", 00:17:48.916 "raid_level": "raid1", 00:17:48.916 "superblock": true, 00:17:48.916 "num_base_bdevs": 2, 00:17:48.916 "num_base_bdevs_discovered": 1, 00:17:48.916 "num_base_bdevs_operational": 1, 00:17:48.916 "base_bdevs_list": [ 00:17:48.916 { 00:17:48.916 "name": null, 00:17:48.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.916 "is_configured": false, 00:17:48.916 "data_offset": 0, 00:17:48.917 "data_size": 63488 00:17:48.917 }, 00:17:48.917 { 00:17:48.917 "name": "BaseBdev2", 00:17:48.917 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:48.917 "is_configured": true, 00:17:48.917 "data_offset": 2048, 00:17:48.917 "data_size": 63488 00:17:48.917 } 00:17:48.917 ] 00:17:48.917 }' 00:17:48.917 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.917 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.175 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.175 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.175 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:17:49.175 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.175 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.175 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.175 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.175 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.175 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.175 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.175 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.175 "name": "raid_bdev1", 00:17:49.175 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:49.175 "strip_size_kb": 0, 00:17:49.175 "state": "online", 00:17:49.175 "raid_level": "raid1", 00:17:49.175 "superblock": true, 00:17:49.175 "num_base_bdevs": 2, 00:17:49.175 "num_base_bdevs_discovered": 1, 00:17:49.175 "num_base_bdevs_operational": 1, 00:17:49.175 "base_bdevs_list": [ 00:17:49.175 { 00:17:49.175 "name": null, 00:17:49.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.175 "is_configured": false, 00:17:49.175 "data_offset": 0, 00:17:49.175 "data_size": 63488 00:17:49.175 }, 00:17:49.175 { 00:17:49.175 "name": "BaseBdev2", 00:17:49.175 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:49.175 "is_configured": true, 00:17:49.176 "data_offset": 2048, 00:17:49.176 "data_size": 63488 00:17:49.176 } 00:17:49.176 ] 00:17:49.176 }' 00:17:49.176 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.176 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.176 06:44:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.176 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.176 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:49.176 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:49.176 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:49.176 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:49.176 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.176 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:49.434 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.434 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:49.434 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.434 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.434 [2024-12-06 06:44:07.824459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.434 [2024-12-06 06:44:07.824855] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:49.434 [2024-12-06 06:44:07.824888] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:49.434 request: 00:17:49.434 { 00:17:49.434 "base_bdev": "BaseBdev1", 00:17:49.434 "raid_bdev": "raid_bdev1", 00:17:49.434 "method": 
"bdev_raid_add_base_bdev", 00:17:49.434 "req_id": 1 00:17:49.434 } 00:17:49.434 Got JSON-RPC error response 00:17:49.434 response: 00:17:49.434 { 00:17:49.434 "code": -22, 00:17:49.434 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:49.434 } 00:17:49.434 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:49.434 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:49.434 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.434 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.434 06:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.434 06:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:50.371 06:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.371 06:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.371 06:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.372 06:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.372 06:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.372 06:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.372 06:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.372 06:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.372 06:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.372 06:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.372 06:44:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.372 06:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.372 06:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.372 06:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.372 06:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.372 06:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.372 "name": "raid_bdev1", 00:17:50.372 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:50.372 "strip_size_kb": 0, 00:17:50.372 "state": "online", 00:17:50.372 "raid_level": "raid1", 00:17:50.372 "superblock": true, 00:17:50.372 "num_base_bdevs": 2, 00:17:50.372 "num_base_bdevs_discovered": 1, 00:17:50.372 "num_base_bdevs_operational": 1, 00:17:50.372 "base_bdevs_list": [ 00:17:50.372 { 00:17:50.372 "name": null, 00:17:50.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.372 "is_configured": false, 00:17:50.372 "data_offset": 0, 00:17:50.372 "data_size": 63488 00:17:50.372 }, 00:17:50.372 { 00:17:50.372 "name": "BaseBdev2", 00:17:50.372 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:50.372 "is_configured": true, 00:17:50.372 "data_offset": 2048, 00:17:50.372 "data_size": 63488 00:17:50.372 } 00:17:50.372 ] 00:17:50.372 }' 00:17:50.372 06:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.372 06:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.940 "name": "raid_bdev1", 00:17:50.940 "uuid": "9721efdf-acb5-493f-9d96-f9232cc8bff3", 00:17:50.940 "strip_size_kb": 0, 00:17:50.940 "state": "online", 00:17:50.940 "raid_level": "raid1", 00:17:50.940 "superblock": true, 00:17:50.940 "num_base_bdevs": 2, 00:17:50.940 "num_base_bdevs_discovered": 1, 00:17:50.940 "num_base_bdevs_operational": 1, 00:17:50.940 "base_bdevs_list": [ 00:17:50.940 { 00:17:50.940 "name": null, 00:17:50.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.940 "is_configured": false, 00:17:50.940 "data_offset": 0, 00:17:50.940 "data_size": 63488 00:17:50.940 }, 00:17:50.940 { 00:17:50.940 "name": "BaseBdev2", 00:17:50.940 "uuid": "a7bb797c-ec69-5f73-8db9-6349f32e9ac0", 00:17:50.940 "is_configured": true, 00:17:50.940 "data_offset": 2048, 00:17:50.940 "data_size": 63488 00:17:50.940 } 00:17:50.940 ] 00:17:50.940 }' 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76104 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76104 ']' 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76104 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76104 00:17:50.940 killing process with pid 76104 00:17:50.940 Received shutdown signal, test time was about 60.000000 seconds 00:17:50.940 00:17:50.940 Latency(us) 00:17:50.940 [2024-12-06T06:44:09.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.940 [2024-12-06T06:44:09.587Z] =================================================================================================================== 00:17:50.940 [2024-12-06T06:44:09.587Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76104' 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76104 00:17:50.940 06:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76104 00:17:50.940 [2024-12-06 
06:44:09.516617] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:50.940 [2024-12-06 06:44:09.516786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.940 [2024-12-06 06:44:09.516882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.940 [2024-12-06 06:44:09.516912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:51.199 [2024-12-06 06:44:09.793730] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:52.590 06:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:52.590 00:17:52.590 real 0m26.950s 00:17:52.590 user 0m33.209s 00:17:52.590 sys 0m3.951s 00:17:52.590 06:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.590 06:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.590 ************************************ 00:17:52.590 END TEST raid_rebuild_test_sb 00:17:52.590 ************************************ 00:17:52.590 06:44:10 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:17:52.590 06:44:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:52.590 06:44:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.590 06:44:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:52.590 ************************************ 00:17:52.590 START TEST raid_rebuild_test_io 00:17:52.590 ************************************ 00:17:52.590 06:44:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:17:52.590 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:52.591 
06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76873 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:52.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76873 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76873 ']' 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.591 06:44:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.591 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:52.591 Zero copy mechanism will not be used. 00:17:52.591 [2024-12-06 06:44:11.038430] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:17:52.591 [2024-12-06 06:44:11.038685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76873 ] 00:17:52.591 [2024-12-06 06:44:11.224638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.849 [2024-12-06 06:44:11.358239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.107 [2024-12-06 06:44:11.564899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.108 [2024-12-06 06:44:11.564946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.676 BaseBdev1_malloc 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.676 [2024-12-06 06:44:12.087069] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:17:53.676 [2024-12-06 06:44:12.087293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.676 [2024-12-06 06:44:12.087372] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:53.676 [2024-12-06 06:44:12.087650] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.676 [2024-12-06 06:44:12.090505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.676 [2024-12-06 06:44:12.090692] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:53.676 BaseBdev1 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.676 BaseBdev2_malloc 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.676 [2024-12-06 06:44:12.140787] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:53.676 [2024-12-06 06:44:12.141005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.676 [2024-12-06 06:44:12.141174] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:53.676 [2024-12-06 06:44:12.141296] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.676 [2024-12-06 06:44:12.144177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.676 [2024-12-06 06:44:12.144341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:53.676 BaseBdev2 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.676 spare_malloc 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.676 spare_delay 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.676 [2024-12-06 06:44:12.217025] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:17:53.676 [2024-12-06 06:44:12.217243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.676 [2024-12-06 06:44:12.217320] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:53.676 [2024-12-06 06:44:12.217438] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.676 [2024-12-06 06:44:12.220392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.676 [2024-12-06 06:44:12.220471] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:53.676 spare 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.676 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.676 [2024-12-06 06:44:12.225168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:53.676 [2024-12-06 06:44:12.227744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:53.677 [2024-12-06 06:44:12.228000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:53.677 [2024-12-06 06:44:12.228067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:53.677 [2024-12-06 06:44:12.228516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:53.677 [2024-12-06 06:44:12.228902] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:53.677 [2024-12-06 06:44:12.229033] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:17:53.677 [2024-12-06 06:44:12.229478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.677 
"name": "raid_bdev1", 00:17:53.677 "uuid": "fe0e4174-62d1-4b67-948c-f220112f0ef4", 00:17:53.677 "strip_size_kb": 0, 00:17:53.677 "state": "online", 00:17:53.677 "raid_level": "raid1", 00:17:53.677 "superblock": false, 00:17:53.677 "num_base_bdevs": 2, 00:17:53.677 "num_base_bdevs_discovered": 2, 00:17:53.677 "num_base_bdevs_operational": 2, 00:17:53.677 "base_bdevs_list": [ 00:17:53.677 { 00:17:53.677 "name": "BaseBdev1", 00:17:53.677 "uuid": "1f3c402b-7295-5a46-bba2-56c54395dbab", 00:17:53.677 "is_configured": true, 00:17:53.677 "data_offset": 0, 00:17:53.677 "data_size": 65536 00:17:53.677 }, 00:17:53.677 { 00:17:53.677 "name": "BaseBdev2", 00:17:53.677 "uuid": "0bbf9dc7-ba28-5685-87d4-4695413abdfb", 00:17:53.677 "is_configured": true, 00:17:53.677 "data_offset": 0, 00:17:53.677 "data_size": 65536 00:17:53.677 } 00:17:53.677 ] 00:17:53.677 }' 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.677 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:54.245 [2024-12-06 06:44:12.734017] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.245 [2024-12-06 06:44:12.829676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.245 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.246 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.246 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.246 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:54.246 06:44:12 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.246 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.246 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.246 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.246 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.246 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.246 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.246 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.246 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.246 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.246 "name": "raid_bdev1", 00:17:54.246 "uuid": "fe0e4174-62d1-4b67-948c-f220112f0ef4", 00:17:54.246 "strip_size_kb": 0, 00:17:54.246 "state": "online", 00:17:54.246 "raid_level": "raid1", 00:17:54.246 "superblock": false, 00:17:54.246 "num_base_bdevs": 2, 00:17:54.246 "num_base_bdevs_discovered": 1, 00:17:54.246 "num_base_bdevs_operational": 1, 00:17:54.246 "base_bdevs_list": [ 00:17:54.246 { 00:17:54.246 "name": null, 00:17:54.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.246 "is_configured": false, 00:17:54.246 "data_offset": 0, 00:17:54.246 "data_size": 65536 00:17:54.246 }, 00:17:54.246 { 00:17:54.246 "name": "BaseBdev2", 00:17:54.246 "uuid": "0bbf9dc7-ba28-5685-87d4-4695413abdfb", 00:17:54.246 "is_configured": true, 00:17:54.246 "data_offset": 0, 00:17:54.246 "data_size": 65536 00:17:54.246 } 00:17:54.246 ] 00:17:54.246 }' 00:17:54.246 06:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:54.246 06:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.505 [2024-12-06 06:44:12.938093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:54.505 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:54.505 Zero copy mechanism will not be used. 00:17:54.505 Running I/O for 60 seconds... 00:17:54.764 06:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:54.764 06:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.764 06:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.764 [2024-12-06 06:44:13.323620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:54.764 06:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.764 06:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:54.764 [2024-12-06 06:44:13.390094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:54.764 [2024-12-06 06:44:13.392913] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:55.023 [2024-12-06 06:44:13.502885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:55.024 [2024-12-06 06:44:13.503792] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:55.024 [2024-12-06 06:44:13.640185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:55.024 [2024-12-06 06:44:13.640828] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:55.282 [2024-12-06 06:44:13.882963] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:55.555 157.00 IOPS, 471.00 MiB/s [2024-12-06T06:44:14.202Z] [2024-12-06 06:44:14.013968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:55.555 [2024-12-06 06:44:14.022338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:55.813 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.813 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.813 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.813 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.813 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.813 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.813 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.813 06:44:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.813 06:44:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:55.813 06:44:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.813 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.813 "name": "raid_bdev1", 00:17:55.813 "uuid": "fe0e4174-62d1-4b67-948c-f220112f0ef4", 00:17:55.813 "strip_size_kb": 0, 00:17:55.813 "state": "online", 00:17:55.813 "raid_level": "raid1", 00:17:55.813 "superblock": false, 00:17:55.813 "num_base_bdevs": 2, 00:17:55.813 
"num_base_bdevs_discovered": 2, 00:17:55.813 "num_base_bdevs_operational": 2, 00:17:55.813 "process": { 00:17:55.813 "type": "rebuild", 00:17:55.813 "target": "spare", 00:17:55.813 "progress": { 00:17:55.813 "blocks": 14336, 00:17:55.813 "percent": 21 00:17:55.813 } 00:17:55.813 }, 00:17:55.813 "base_bdevs_list": [ 00:17:55.813 { 00:17:55.813 "name": "spare", 00:17:55.813 "uuid": "6d4d82d1-c92f-5974-804c-a54599ed2045", 00:17:55.813 "is_configured": true, 00:17:55.813 "data_offset": 0, 00:17:55.813 "data_size": 65536 00:17:55.813 }, 00:17:55.813 { 00:17:55.813 "name": "BaseBdev2", 00:17:55.813 "uuid": "0bbf9dc7-ba28-5685-87d4-4695413abdfb", 00:17:55.813 "is_configured": true, 00:17:55.813 "data_offset": 0, 00:17:55.813 "data_size": 65536 00:17:55.813 } 00:17:55.813 ] 00:17:55.813 }' 00:17:55.813 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.071 [2024-12-06 06:44:14.467309] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:56.072 [2024-12-06 06:44:14.467901] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:56.072 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.072 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.072 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.072 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:56.072 06:44:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.072 06:44:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.072 [2024-12-06 06:44:14.537976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:56.072 [2024-12-06 06:44:14.707451] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:56.330 [2024-12-06 06:44:14.718328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.330 [2024-12-06 06:44:14.718387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.330 [2024-12-06 06:44:14.718406] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:56.330 [2024-12-06 06:44:14.779227] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:17:56.330 06:44:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.330 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.330 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.330 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.330 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.330 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.330 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.330 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.330 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.330 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.330 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.330 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:56.331 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.331 06:44:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.331 06:44:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.331 06:44:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.331 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.331 "name": "raid_bdev1", 00:17:56.331 "uuid": "fe0e4174-62d1-4b67-948c-f220112f0ef4", 00:17:56.331 "strip_size_kb": 0, 00:17:56.331 "state": "online", 00:17:56.331 "raid_level": "raid1", 00:17:56.331 "superblock": false, 00:17:56.331 "num_base_bdevs": 2, 00:17:56.331 "num_base_bdevs_discovered": 1, 00:17:56.331 "num_base_bdevs_operational": 1, 00:17:56.331 "base_bdevs_list": [ 00:17:56.331 { 00:17:56.331 "name": null, 00:17:56.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.331 "is_configured": false, 00:17:56.331 "data_offset": 0, 00:17:56.331 "data_size": 65536 00:17:56.331 }, 00:17:56.331 { 00:17:56.331 "name": "BaseBdev2", 00:17:56.331 "uuid": "0bbf9dc7-ba28-5685-87d4-4695413abdfb", 00:17:56.331 "is_configured": true, 00:17:56.331 "data_offset": 0, 00:17:56.331 "data_size": 65536 00:17:56.331 } 00:17:56.331 ] 00:17:56.331 }' 00:17:56.331 06:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.331 06:44:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.899 129.00 IOPS, 387.00 MiB/s [2024-12-06T06:44:15.546Z] 06:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.899 06:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.899 06:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:17:56.899 06:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.899 06:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.899 06:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.899 06:44:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.899 06:44:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.899 06:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.899 06:44:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.899 06:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.899 "name": "raid_bdev1", 00:17:56.899 "uuid": "fe0e4174-62d1-4b67-948c-f220112f0ef4", 00:17:56.899 "strip_size_kb": 0, 00:17:56.899 "state": "online", 00:17:56.899 "raid_level": "raid1", 00:17:56.899 "superblock": false, 00:17:56.899 "num_base_bdevs": 2, 00:17:56.899 "num_base_bdevs_discovered": 1, 00:17:56.899 "num_base_bdevs_operational": 1, 00:17:56.899 "base_bdevs_list": [ 00:17:56.899 { 00:17:56.899 "name": null, 00:17:56.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.899 "is_configured": false, 00:17:56.899 "data_offset": 0, 00:17:56.899 "data_size": 65536 00:17:56.899 }, 00:17:56.899 { 00:17:56.899 "name": "BaseBdev2", 00:17:56.899 "uuid": "0bbf9dc7-ba28-5685-87d4-4695413abdfb", 00:17:56.899 "is_configured": true, 00:17:56.899 "data_offset": 0, 00:17:56.899 "data_size": 65536 00:17:56.899 } 00:17:56.899 ] 00:17:56.899 }' 00:17:56.899 06:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.899 06:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:56.899 06:44:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.899 06:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:56.900 06:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:56.900 06:44:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.900 06:44:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.900 [2024-12-06 06:44:15.487017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:56.900 06:44:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.900 06:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:56.900 [2024-12-06 06:44:15.541863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:57.158 [2024-12-06 06:44:15.544449] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:57.158 [2024-12-06 06:44:15.670680] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:57.158 [2024-12-06 06:44:15.671391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:57.417 [2024-12-06 06:44:15.891281] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:57.417 [2024-12-06 06:44:15.891968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:57.676 148.00 IOPS, 444.00 MiB/s [2024-12-06T06:44:16.323Z] [2024-12-06 06:44:16.256820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:57.935 [2024-12-06 
06:44:16.384336] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:57.935 [2024-12-06 06:44:16.384746] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:57.935 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.935 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.935 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.935 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.935 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.935 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.935 06:44:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.935 06:44:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.935 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.935 06:44:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.935 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.935 "name": "raid_bdev1", 00:17:57.935 "uuid": "fe0e4174-62d1-4b67-948c-f220112f0ef4", 00:17:57.935 "strip_size_kb": 0, 00:17:57.935 "state": "online", 00:17:57.935 "raid_level": "raid1", 00:17:57.935 "superblock": false, 00:17:57.935 "num_base_bdevs": 2, 00:17:57.935 "num_base_bdevs_discovered": 2, 00:17:57.935 "num_base_bdevs_operational": 2, 00:17:57.935 "process": { 00:17:57.935 "type": "rebuild", 00:17:57.935 "target": "spare", 00:17:57.935 "progress": { 
00:17:57.935 "blocks": 12288, 00:17:57.935 "percent": 18 00:17:57.935 } 00:17:57.935 }, 00:17:57.935 "base_bdevs_list": [ 00:17:57.935 { 00:17:57.935 "name": "spare", 00:17:57.935 "uuid": "6d4d82d1-c92f-5974-804c-a54599ed2045", 00:17:57.935 "is_configured": true, 00:17:57.935 "data_offset": 0, 00:17:57.935 "data_size": 65536 00:17:57.935 }, 00:17:57.935 { 00:17:57.935 "name": "BaseBdev2", 00:17:57.935 "uuid": "0bbf9dc7-ba28-5685-87d4-4695413abdfb", 00:17:57.935 "is_configured": true, 00:17:57.935 "data_offset": 0, 00:17:57.935 "data_size": 65536 00:17:57.935 } 00:17:57.935 ] 00:17:57.935 }' 00:17:57.935 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.194 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.195 [2024-12-06 06:44:16.641674] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:58.195 [2024-12-06 06:44:16.642222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=436 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.195 "name": "raid_bdev1", 00:17:58.195 "uuid": "fe0e4174-62d1-4b67-948c-f220112f0ef4", 00:17:58.195 "strip_size_kb": 0, 00:17:58.195 "state": "online", 00:17:58.195 "raid_level": "raid1", 00:17:58.195 "superblock": false, 00:17:58.195 "num_base_bdevs": 2, 00:17:58.195 "num_base_bdevs_discovered": 2, 00:17:58.195 "num_base_bdevs_operational": 2, 00:17:58.195 "process": { 00:17:58.195 "type": "rebuild", 00:17:58.195 "target": "spare", 00:17:58.195 "progress": { 00:17:58.195 "blocks": 14336, 00:17:58.195 "percent": 21 00:17:58.195 } 00:17:58.195 }, 00:17:58.195 "base_bdevs_list": [ 00:17:58.195 { 00:17:58.195 "name": "spare", 00:17:58.195 "uuid": "6d4d82d1-c92f-5974-804c-a54599ed2045", 00:17:58.195 "is_configured": true, 00:17:58.195 "data_offset": 
0, 00:17:58.195 "data_size": 65536 00:17:58.195 }, 00:17:58.195 { 00:17:58.195 "name": "BaseBdev2", 00:17:58.195 "uuid": "0bbf9dc7-ba28-5685-87d4-4695413abdfb", 00:17:58.195 "is_configured": true, 00:17:58.195 "data_offset": 0, 00:17:58.195 "data_size": 65536 00:17:58.195 } 00:17:58.195 ] 00:17:58.195 }' 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.195 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.459 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.459 06:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:58.459 [2024-12-06 06:44:16.879913] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:58.716 129.25 IOPS, 387.75 MiB/s [2024-12-06T06:44:17.363Z] [2024-12-06 06:44:17.219263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:58.974 [2024-12-06 06:44:17.455313] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:59.232 06:44:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:59.232 06:44:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.232 06:44:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.232 06:44:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.232 06:44:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.232 06:44:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.232 06:44:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.232 06:44:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.232 06:44:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.232 06:44:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.489 06:44:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.489 06:44:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.489 "name": "raid_bdev1", 00:17:59.489 "uuid": "fe0e4174-62d1-4b67-948c-f220112f0ef4", 00:17:59.489 "strip_size_kb": 0, 00:17:59.489 "state": "online", 00:17:59.489 "raid_level": "raid1", 00:17:59.489 "superblock": false, 00:17:59.489 "num_base_bdevs": 2, 00:17:59.489 "num_base_bdevs_discovered": 2, 00:17:59.489 "num_base_bdevs_operational": 2, 00:17:59.489 "process": { 00:17:59.489 "type": "rebuild", 00:17:59.489 "target": "spare", 00:17:59.489 "progress": { 00:17:59.489 "blocks": 28672, 00:17:59.489 "percent": 43 00:17:59.489 } 00:17:59.489 }, 00:17:59.489 "base_bdevs_list": [ 00:17:59.489 { 00:17:59.489 "name": "spare", 00:17:59.489 "uuid": "6d4d82d1-c92f-5974-804c-a54599ed2045", 00:17:59.489 "is_configured": true, 00:17:59.489 "data_offset": 0, 00:17:59.489 "data_size": 65536 00:17:59.489 }, 00:17:59.489 { 00:17:59.489 "name": "BaseBdev2", 00:17:59.489 "uuid": "0bbf9dc7-ba28-5685-87d4-4695413abdfb", 00:17:59.489 "is_configured": true, 00:17:59.489 "data_offset": 0, 00:17:59.489 "data_size": 65536 00:17:59.489 } 00:17:59.489 ] 00:17:59.489 }' 00:17:59.489 06:44:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.489 116.00 IOPS, 348.00 MiB/s [2024-12-06T06:44:18.136Z] 06:44:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:59.489 06:44:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.490 06:44:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.490 06:44:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:59.490 [2024-12-06 06:44:18.025245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:17:59.748 [2024-12-06 06:44:18.235803] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:00.007 [2024-12-06 06:44:18.467389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:18:00.007 [2024-12-06 06:44:18.577029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:18:00.007 [2024-12-06 06:44:18.577406] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:18:00.574 105.17 IOPS, 315.50 MiB/s [2024-12-06T06:44:19.221Z] 06:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:00.574 06:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.574 06:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.574 06:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.574 06:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.574 06:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.574 06:44:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.574 06:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.574 06:44:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.574 06:44:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:00.574 [2024-12-06 06:44:19.035156] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:18:00.574 06:44:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.574 06:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.574 "name": "raid_bdev1", 00:18:00.574 "uuid": "fe0e4174-62d1-4b67-948c-f220112f0ef4", 00:18:00.574 "strip_size_kb": 0, 00:18:00.574 "state": "online", 00:18:00.574 "raid_level": "raid1", 00:18:00.574 "superblock": false, 00:18:00.574 "num_base_bdevs": 2, 00:18:00.574 "num_base_bdevs_discovered": 2, 00:18:00.574 "num_base_bdevs_operational": 2, 00:18:00.574 "process": { 00:18:00.574 "type": "rebuild", 00:18:00.574 "target": "spare", 00:18:00.574 "progress": { 00:18:00.574 "blocks": 45056, 00:18:00.574 "percent": 68 00:18:00.574 } 00:18:00.574 }, 00:18:00.574 "base_bdevs_list": [ 00:18:00.574 { 00:18:00.574 "name": "spare", 00:18:00.574 "uuid": "6d4d82d1-c92f-5974-804c-a54599ed2045", 00:18:00.574 "is_configured": true, 00:18:00.574 "data_offset": 0, 00:18:00.574 "data_size": 65536 00:18:00.574 }, 00:18:00.574 { 00:18:00.574 "name": "BaseBdev2", 00:18:00.574 "uuid": "0bbf9dc7-ba28-5685-87d4-4695413abdfb", 00:18:00.574 "is_configured": true, 00:18:00.574 "data_offset": 0, 00:18:00.574 "data_size": 65536 00:18:00.574 } 00:18:00.574 ] 00:18:00.574 }' 00:18:00.574 06:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.574 06:44:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.574 06:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.574 06:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.574 06:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:00.833 [2024-12-06 06:44:19.365572] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:18:01.401 94.57 IOPS, 283.71 MiB/s [2024-12-06T06:44:20.048Z] [2024-12-06 06:44:20.031386] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:01.661 [2024-12-06 06:44:20.139225] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:01.661 [2024-12-06 06:44:20.141713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.661 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:01.661 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.661 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.661 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.661 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.661 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.661 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.661 06:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.661 06:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:18:01.661 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.661 06:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.661 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.661 "name": "raid_bdev1", 00:18:01.661 "uuid": "fe0e4174-62d1-4b67-948c-f220112f0ef4", 00:18:01.661 "strip_size_kb": 0, 00:18:01.661 "state": "online", 00:18:01.661 "raid_level": "raid1", 00:18:01.661 "superblock": false, 00:18:01.661 "num_base_bdevs": 2, 00:18:01.661 "num_base_bdevs_discovered": 2, 00:18:01.661 "num_base_bdevs_operational": 2, 00:18:01.661 "base_bdevs_list": [ 00:18:01.661 { 00:18:01.661 "name": "spare", 00:18:01.661 "uuid": "6d4d82d1-c92f-5974-804c-a54599ed2045", 00:18:01.661 "is_configured": true, 00:18:01.661 "data_offset": 0, 00:18:01.661 "data_size": 65536 00:18:01.661 }, 00:18:01.661 { 00:18:01.661 "name": "BaseBdev2", 00:18:01.661 "uuid": "0bbf9dc7-ba28-5685-87d4-4695413abdfb", 00:18:01.661 "is_configured": true, 00:18:01.661 "data_offset": 0, 00:18:01.661 "data_size": 65536 00:18:01.661 } 00:18:01.661 ] 00:18:01.661 }' 00:18:01.661 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.661 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:01.661 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.920 
06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.920 "name": "raid_bdev1", 00:18:01.920 "uuid": "fe0e4174-62d1-4b67-948c-f220112f0ef4", 00:18:01.920 "strip_size_kb": 0, 00:18:01.920 "state": "online", 00:18:01.920 "raid_level": "raid1", 00:18:01.920 "superblock": false, 00:18:01.920 "num_base_bdevs": 2, 00:18:01.920 "num_base_bdevs_discovered": 2, 00:18:01.920 "num_base_bdevs_operational": 2, 00:18:01.920 "base_bdevs_list": [ 00:18:01.920 { 00:18:01.920 "name": "spare", 00:18:01.920 "uuid": "6d4d82d1-c92f-5974-804c-a54599ed2045", 00:18:01.920 "is_configured": true, 00:18:01.920 "data_offset": 0, 00:18:01.920 "data_size": 65536 00:18:01.920 }, 00:18:01.920 { 00:18:01.920 "name": "BaseBdev2", 00:18:01.920 "uuid": "0bbf9dc7-ba28-5685-87d4-4695413abdfb", 00:18:01.920 "is_configured": true, 00:18:01.920 "data_offset": 0, 00:18:01.920 "data_size": 65536 00:18:01.920 } 00:18:01.920 ] 00:18:01.920 }' 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.920 "name": "raid_bdev1", 00:18:01.920 "uuid": "fe0e4174-62d1-4b67-948c-f220112f0ef4", 00:18:01.920 "strip_size_kb": 0, 00:18:01.920 "state": "online", 00:18:01.920 "raid_level": "raid1", 00:18:01.920 "superblock": false, 00:18:01.920 "num_base_bdevs": 2, 00:18:01.920 "num_base_bdevs_discovered": 2, 00:18:01.920 "num_base_bdevs_operational": 2, 00:18:01.920 "base_bdevs_list": [ 00:18:01.920 { 00:18:01.920 "name": "spare", 00:18:01.920 "uuid": "6d4d82d1-c92f-5974-804c-a54599ed2045", 00:18:01.920 "is_configured": true, 00:18:01.920 "data_offset": 0, 00:18:01.920 "data_size": 65536 00:18:01.920 }, 00:18:01.920 { 00:18:01.920 "name": "BaseBdev2", 00:18:01.920 "uuid": "0bbf9dc7-ba28-5685-87d4-4695413abdfb", 00:18:01.920 "is_configured": true, 00:18:01.920 "data_offset": 0, 00:18:01.920 "data_size": 65536 00:18:01.920 } 00:18:01.920 ] 00:18:01.920 }' 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.920 06:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.485 87.00 IOPS, 261.00 MiB/s [2024-12-06T06:44:21.132Z] 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:02.485 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.485 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.485 [2024-12-06 06:44:21.019543] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.485 [2024-12-06 06:44:21.019581] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.485 00:18:02.485 Latency(us) 00:18:02.485 [2024-12-06T06:44:21.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.485 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 
00:18:02.485 raid_bdev1 : 8.18 85.47 256.40 0.00 0.00 14703.93 297.89 111053.73 00:18:02.485 [2024-12-06T06:44:21.132Z] =================================================================================================================== 00:18:02.485 [2024-12-06T06:44:21.132Z] Total : 85.47 256.40 0.00 0.00 14703.93 297.89 111053.73 00:18:02.743 [2024-12-06 06:44:21.140135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.743 [2024-12-06 06:44:21.140219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.743 [2024-12-06 06:44:21.140331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.743 [2024-12-06 06:44:21.140350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:02.743 { 00:18:02.743 "results": [ 00:18:02.743 { 00:18:02.743 "job": "raid_bdev1", 00:18:02.743 "core_mask": "0x1", 00:18:02.743 "workload": "randrw", 00:18:02.743 "percentage": 50, 00:18:02.743 "status": "finished", 00:18:02.743 "queue_depth": 2, 00:18:02.743 "io_size": 3145728, 00:18:02.743 "runtime": 8.178588, 00:18:02.743 "iops": 85.46707573483344, 00:18:02.743 "mibps": 256.40122720450034, 00:18:02.743 "io_failed": 0, 00:18:02.743 "io_timeout": 0, 00:18:02.743 "avg_latency_us": 14703.932828716348, 00:18:02.743 "min_latency_us": 297.8909090909091, 00:18:02.743 "max_latency_us": 111053.73090909091 00:18:02.743 } 00:18:02.743 ], 00:18:02.743 "core_count": 1 00:18:02.743 } 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.743 06:44:21 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:02.743 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:03.002 /dev/nbd0 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@873 -- # local i 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:03.002 1+0 records in 00:18:03.002 1+0 records out 00:18:03.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038003 s, 10.8 MB/s 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z 
BaseBdev2 ']' 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:03.002 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:18:03.259 /dev/nbd1 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- 
# break 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:03.259 1+0 records in 00:18:03.259 1+0 records out 00:18:03.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303761 s, 13.5 MB/s 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:03.259 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:03.516 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:03.516 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:03.516 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:03.516 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:03.516 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:03.516 
06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:03.516 06:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:03.774 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:03.774 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:03.774 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:03.774 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:03.774 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:03.774 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:03.774 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:03.774 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:03.774 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:03.774 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:03.774 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:03.774 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:03.774 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:03.774 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:03.774 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76873 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76873 ']' 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76873 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76873 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.032 killing process with pid 76873 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76873' 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76873 00:18:04.032 Received shutdown signal, test time 
was about 9.616316 seconds 00:18:04.032 00:18:04.032 Latency(us) 00:18:04.032 [2024-12-06T06:44:22.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.032 [2024-12-06T06:44:22.679Z] =================================================================================================================== 00:18:04.032 [2024-12-06T06:44:22.679Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:04.032 [2024-12-06 06:44:22.557195] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:04.032 06:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76873 00:18:04.311 [2024-12-06 06:44:22.761855] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:05.266 06:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:05.266 00:18:05.266 real 0m12.954s 00:18:05.266 user 0m16.899s 00:18:05.266 sys 0m1.402s 00:18:05.266 ************************************ 00:18:05.266 END TEST raid_rebuild_test_io 00:18:05.266 ************************************ 00:18:05.266 06:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.266 06:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.524 06:44:23 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:18:05.524 06:44:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:05.524 06:44:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.524 06:44:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:05.524 ************************************ 00:18:05.524 START TEST raid_rebuild_test_sb_io 00:18:05.524 ************************************ 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:05.524 06:44:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77249 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77249 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77249 ']' 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.524 06:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.524 [2024-12-06 06:44:24.047147] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:18:05.524 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:05.524 Zero copy mechanism will not be used. 00:18:05.524 [2024-12-06 06:44:24.047419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77249 ] 00:18:05.781 [2024-12-06 06:44:24.243959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.781 [2024-12-06 06:44:24.391935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.039 [2024-12-06 06:44:24.598245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.039 [2024-12-06 06:44:24.598309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.606 06:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.606 06:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:18:06.606 06:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:06.606 06:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:06.606 06:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.606 06:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.606 BaseBdev1_malloc 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.606 [2024-12-06 
06:44:25.047320] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:06.606 [2024-12-06 06:44:25.047395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.606 [2024-12-06 06:44:25.047427] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:06.606 [2024-12-06 06:44:25.047447] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.606 [2024-12-06 06:44:25.050165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.606 [2024-12-06 06:44:25.050216] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:06.606 BaseBdev1 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.606 BaseBdev2_malloc 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.606 [2024-12-06 06:44:25.103435] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:06.606 [2024-12-06 06:44:25.103553] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.606 [2024-12-06 06:44:25.103589] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:06.606 [2024-12-06 06:44:25.103608] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.606 [2024-12-06 06:44:25.106421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.606 [2024-12-06 06:44:25.106499] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:06.606 BaseBdev2 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.606 spare_malloc 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.606 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.607 spare_delay 00:18:06.607 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.607 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:06.607 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.607 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:18:06.607 [2024-12-06 06:44:25.184636] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:06.607 [2024-12-06 06:44:25.184706] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.607 [2024-12-06 06:44:25.184736] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:06.607 [2024-12-06 06:44:25.184755] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.607 [2024-12-06 06:44:25.187502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.607 [2024-12-06 06:44:25.187567] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:06.607 spare 00:18:06.607 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.607 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:06.607 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.607 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.607 [2024-12-06 06:44:25.192707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.607 [2024-12-06 06:44:25.195108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.607 [2024-12-06 06:44:25.195356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:06.607 [2024-12-06 06:44:25.195380] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:06.607 [2024-12-06 06:44:25.195699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:06.607 [2024-12-06 06:44:25.195929] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: 
raid bdev generic 0x617000007780 00:18:06.607 [2024-12-06 06:44:25.195944] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:06.607 [2024-12-06 06:44:25.196128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.607 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.608 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:06.608 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.608 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.608 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.608 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.608 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.608 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.608 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.608 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.608 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.608 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.608 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.608 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.608 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:18:06.608 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.867 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.867 "name": "raid_bdev1", 00:18:06.867 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:06.867 "strip_size_kb": 0, 00:18:06.867 "state": "online", 00:18:06.867 "raid_level": "raid1", 00:18:06.867 "superblock": true, 00:18:06.867 "num_base_bdevs": 2, 00:18:06.867 "num_base_bdevs_discovered": 2, 00:18:06.867 "num_base_bdevs_operational": 2, 00:18:06.867 "base_bdevs_list": [ 00:18:06.867 { 00:18:06.867 "name": "BaseBdev1", 00:18:06.867 "uuid": "22bbd9e5-a776-5b04-a7d4-e33f8dd4412c", 00:18:06.867 "is_configured": true, 00:18:06.867 "data_offset": 2048, 00:18:06.867 "data_size": 63488 00:18:06.867 }, 00:18:06.867 { 00:18:06.867 "name": "BaseBdev2", 00:18:06.867 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:06.867 "is_configured": true, 00:18:06.867 "data_offset": 2048, 00:18:06.867 "data_size": 63488 00:18:06.867 } 00:18:06.867 ] 00:18:06.867 }' 00:18:06.867 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.867 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.124 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:07.124 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:07.124 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.124 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.125 [2024-12-06 06:44:25.721248] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.125 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.382 06:44:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.382 [2024-12-06 06:44:25.828878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.382 "name": "raid_bdev1", 00:18:07.382 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:07.382 "strip_size_kb": 0, 00:18:07.382 "state": "online", 00:18:07.382 "raid_level": "raid1", 00:18:07.382 "superblock": true, 00:18:07.382 "num_base_bdevs": 2, 00:18:07.382 "num_base_bdevs_discovered": 1, 00:18:07.382 "num_base_bdevs_operational": 1, 00:18:07.382 "base_bdevs_list": [ 00:18:07.382 { 00:18:07.382 "name": null, 00:18:07.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.382 "is_configured": false, 00:18:07.382 "data_offset": 0, 00:18:07.382 "data_size": 63488 00:18:07.382 }, 00:18:07.382 { 
00:18:07.382 "name": "BaseBdev2", 00:18:07.382 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:07.382 "is_configured": true, 00:18:07.382 "data_offset": 2048, 00:18:07.382 "data_size": 63488 00:18:07.382 } 00:18:07.382 ] 00:18:07.382 }' 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.382 06:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.382 [2024-12-06 06:44:25.936816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:07.382 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:07.382 Zero copy mechanism will not be used. 00:18:07.382 Running I/O for 60 seconds... 00:18:07.949 06:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:07.949 06:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.949 06:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.949 [2024-12-06 06:44:26.379784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.949 06:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.949 06:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:07.949 [2024-12-06 06:44:26.425199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:07.949 [2024-12-06 06:44:26.427723] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:07.949 [2024-12-06 06:44:26.552335] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:07.949 [2024-12-06 06:44:26.553026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 
00:18:08.208 [2024-12-06 06:44:26.782097] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:08.208 [2024-12-06 06:44:26.782435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:08.467 171.00 IOPS, 513.00 MiB/s [2024-12-06T06:44:27.114Z] [2024-12-06 06:44:27.100162] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:08.726 [2024-12-06 06:44:27.227292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.985 [2024-12-06 06:44:27.459386] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 
18432 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.985 "name": "raid_bdev1", 00:18:08.985 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:08.985 "strip_size_kb": 0, 00:18:08.985 "state": "online", 00:18:08.985 "raid_level": "raid1", 00:18:08.985 "superblock": true, 00:18:08.985 "num_base_bdevs": 2, 00:18:08.985 "num_base_bdevs_discovered": 2, 00:18:08.985 "num_base_bdevs_operational": 2, 00:18:08.985 "process": { 00:18:08.985 "type": "rebuild", 00:18:08.985 "target": "spare", 00:18:08.985 "progress": { 00:18:08.985 "blocks": 12288, 00:18:08.985 "percent": 19 00:18:08.985 } 00:18:08.985 }, 00:18:08.985 "base_bdevs_list": [ 00:18:08.985 { 00:18:08.985 "name": "spare", 00:18:08.985 "uuid": "1da0bba3-d704-5f89-824c-4dca993d8d51", 00:18:08.985 "is_configured": true, 00:18:08.985 "data_offset": 2048, 00:18:08.985 "data_size": 63488 00:18:08.985 }, 00:18:08.985 { 00:18:08.985 "name": "BaseBdev2", 00:18:08.985 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:08.985 "is_configured": true, 00:18:08.985 "data_offset": 2048, 00:18:08.985 "data_size": 63488 00:18:08.985 } 00:18:08.985 ] 00:18:08.985 }' 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.985 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:18:08.985 [2024-12-06 06:44:27.572366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.244 [2024-12-06 06:44:27.672315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:09.244 [2024-12-06 06:44:27.672671] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:09.244 [2024-12-06 06:44:27.774576] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:09.244 [2024-12-06 06:44:27.793031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.244 [2024-12-06 06:44:27.793092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.244 [2024-12-06 06:44:27.793110] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:09.244 [2024-12-06 06:44:27.836345] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:18:09.244 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.244 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.244 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.244 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.244 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.244 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.244 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.244 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:09.244 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.244 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.244 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.244 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.244 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.244 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.244 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.502 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.502 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.502 "name": "raid_bdev1", 00:18:09.502 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:09.503 "strip_size_kb": 0, 00:18:09.503 "state": "online", 00:18:09.503 "raid_level": "raid1", 00:18:09.503 "superblock": true, 00:18:09.503 "num_base_bdevs": 2, 00:18:09.503 "num_base_bdevs_discovered": 1, 00:18:09.503 "num_base_bdevs_operational": 1, 00:18:09.503 "base_bdevs_list": [ 00:18:09.503 { 00:18:09.503 "name": null, 00:18:09.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.503 "is_configured": false, 00:18:09.503 "data_offset": 0, 00:18:09.503 "data_size": 63488 00:18:09.503 }, 00:18:09.503 { 00:18:09.503 "name": "BaseBdev2", 00:18:09.503 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:09.503 "is_configured": true, 00:18:09.503 "data_offset": 2048, 00:18:09.503 "data_size": 63488 00:18:09.503 } 00:18:09.503 ] 00:18:09.503 }' 00:18:09.503 06:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.503 06:44:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.761 131.50 IOPS, 394.50 MiB/s [2024-12-06T06:44:28.408Z] 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:09.761 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.761 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:09.761 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:09.761 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.761 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.761 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.761 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.761 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.761 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.761 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.761 "name": "raid_bdev1", 00:18:09.761 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:09.761 "strip_size_kb": 0, 00:18:09.761 "state": "online", 00:18:09.761 "raid_level": "raid1", 00:18:09.761 "superblock": true, 00:18:09.761 "num_base_bdevs": 2, 00:18:09.761 "num_base_bdevs_discovered": 1, 00:18:09.761 "num_base_bdevs_operational": 1, 00:18:09.761 "base_bdevs_list": [ 00:18:09.761 { 00:18:09.761 "name": null, 00:18:09.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.761 "is_configured": false, 00:18:09.761 "data_offset": 0, 00:18:09.761 "data_size": 63488 00:18:09.761 }, 00:18:09.761 { 
00:18:09.761 "name": "BaseBdev2", 00:18:09.761 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:09.761 "is_configured": true, 00:18:09.761 "data_offset": 2048, 00:18:09.761 "data_size": 63488 00:18:09.761 } 00:18:09.761 ] 00:18:09.761 }' 00:18:09.761 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.019 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:10.019 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.019 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:10.019 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:10.019 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.019 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:10.019 [2024-12-06 06:44:28.497894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.019 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.019 06:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:10.019 [2024-12-06 06:44:28.575747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:10.019 [2024-12-06 06:44:28.578322] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:10.278 [2024-12-06 06:44:28.711208] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:10.278 [2024-12-06 06:44:28.840360] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:10.278 [2024-12-06 06:44:28.840792] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:10.796 151.33 IOPS, 454.00 MiB/s [2024-12-06T06:44:29.443Z] [2024-12-06 06:44:29.192931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:10.796 [2024-12-06 06:44:29.412944] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.055 "name": "raid_bdev1", 00:18:11.055 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:11.055 "strip_size_kb": 0, 00:18:11.055 "state": "online", 00:18:11.055 "raid_level": "raid1", 00:18:11.055 "superblock": true, 00:18:11.055 
"num_base_bdevs": 2, 00:18:11.055 "num_base_bdevs_discovered": 2, 00:18:11.055 "num_base_bdevs_operational": 2, 00:18:11.055 "process": { 00:18:11.055 "type": "rebuild", 00:18:11.055 "target": "spare", 00:18:11.055 "progress": { 00:18:11.055 "blocks": 10240, 00:18:11.055 "percent": 16 00:18:11.055 } 00:18:11.055 }, 00:18:11.055 "base_bdevs_list": [ 00:18:11.055 { 00:18:11.055 "name": "spare", 00:18:11.055 "uuid": "1da0bba3-d704-5f89-824c-4dca993d8d51", 00:18:11.055 "is_configured": true, 00:18:11.055 "data_offset": 2048, 00:18:11.055 "data_size": 63488 00:18:11.055 }, 00:18:11.055 { 00:18:11.055 "name": "BaseBdev2", 00:18:11.055 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:11.055 "is_configured": true, 00:18:11.055 "data_offset": 2048, 00:18:11.055 "data_size": 63488 00:18:11.055 } 00:18:11.055 ] 00:18:11.055 }' 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:11.055 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:11.055 06:44:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=449 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.055 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:11.314 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.314 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.314 "name": "raid_bdev1", 00:18:11.314 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:11.314 "strip_size_kb": 0, 00:18:11.314 "state": "online", 00:18:11.314 "raid_level": "raid1", 00:18:11.314 "superblock": true, 00:18:11.314 "num_base_bdevs": 2, 00:18:11.314 "num_base_bdevs_discovered": 2, 00:18:11.314 "num_base_bdevs_operational": 2, 00:18:11.314 "process": { 00:18:11.314 "type": "rebuild", 00:18:11.314 "target": "spare", 00:18:11.314 "progress": { 00:18:11.314 "blocks": 12288, 00:18:11.314 "percent": 19 00:18:11.314 } 00:18:11.314 }, 
00:18:11.314 "base_bdevs_list": [ 00:18:11.314 { 00:18:11.314 "name": "spare", 00:18:11.314 "uuid": "1da0bba3-d704-5f89-824c-4dca993d8d51", 00:18:11.314 "is_configured": true, 00:18:11.314 "data_offset": 2048, 00:18:11.314 "data_size": 63488 00:18:11.314 }, 00:18:11.314 { 00:18:11.314 "name": "BaseBdev2", 00:18:11.314 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:11.314 "is_configured": true, 00:18:11.314 "data_offset": 2048, 00:18:11.314 "data_size": 63488 00:18:11.314 } 00:18:11.314 ] 00:18:11.314 }' 00:18:11.314 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.314 [2024-12-06 06:44:29.755425] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:11.314 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.314 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.314 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.314 06:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:11.573 135.25 IOPS, 405.75 MiB/s [2024-12-06T06:44:30.220Z] [2024-12-06 06:44:29.994019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:11.833 [2024-12-06 06:44:30.399759] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:12.400 06:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:12.400 06:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.400 06:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.400 
06:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.400 06:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.400 06:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.400 06:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.400 06:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.400 06:44:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.400 06:44:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:12.400 06:44:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.400 06:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.400 "name": "raid_bdev1", 00:18:12.400 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:12.400 "strip_size_kb": 0, 00:18:12.400 "state": "online", 00:18:12.400 "raid_level": "raid1", 00:18:12.400 "superblock": true, 00:18:12.400 "num_base_bdevs": 2, 00:18:12.400 "num_base_bdevs_discovered": 2, 00:18:12.400 "num_base_bdevs_operational": 2, 00:18:12.400 "process": { 00:18:12.400 "type": "rebuild", 00:18:12.400 "target": "spare", 00:18:12.400 "progress": { 00:18:12.400 "blocks": 30720, 00:18:12.400 "percent": 48 00:18:12.400 } 00:18:12.400 }, 00:18:12.400 "base_bdevs_list": [ 00:18:12.400 { 00:18:12.400 "name": "spare", 00:18:12.400 "uuid": "1da0bba3-d704-5f89-824c-4dca993d8d51", 00:18:12.400 "is_configured": true, 00:18:12.400 "data_offset": 2048, 00:18:12.400 "data_size": 63488 00:18:12.400 }, 00:18:12.400 { 00:18:12.400 "name": "BaseBdev2", 00:18:12.400 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:12.400 "is_configured": true, 00:18:12.400 "data_offset": 2048, 00:18:12.400 
"data_size": 63488 00:18:12.400 } 00:18:12.400 ] 00:18:12.400 }' 00:18:12.400 06:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.400 120.60 IOPS, 361.80 MiB/s [2024-12-06T06:44:31.047Z] 06:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.400 06:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.400 [2024-12-06 06:44:31.026063] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:12.400 [2024-12-06 06:44:31.026436] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:12.400 06:44:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.400 06:44:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:12.969 [2024-12-06 06:44:31.389833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:18:13.227 [2024-12-06 06:44:31.754040] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:18:13.486 108.50 IOPS, 325.50 MiB/s [2024-12-06T06:44:32.133Z] 06:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:13.486 06:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.486 06:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.486 06:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.486 06:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.486 06:44:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.486 06:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.486 06:44:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.486 06:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.486 06:44:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.486 06:44:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.486 06:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.486 "name": "raid_bdev1", 00:18:13.486 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:13.486 "strip_size_kb": 0, 00:18:13.486 "state": "online", 00:18:13.486 "raid_level": "raid1", 00:18:13.486 "superblock": true, 00:18:13.486 "num_base_bdevs": 2, 00:18:13.486 "num_base_bdevs_discovered": 2, 00:18:13.486 "num_base_bdevs_operational": 2, 00:18:13.486 "process": { 00:18:13.486 "type": "rebuild", 00:18:13.486 "target": "spare", 00:18:13.486 "progress": { 00:18:13.486 "blocks": 51200, 00:18:13.486 "percent": 80 00:18:13.486 } 00:18:13.486 }, 00:18:13.486 "base_bdevs_list": [ 00:18:13.486 { 00:18:13.486 "name": "spare", 00:18:13.486 "uuid": "1da0bba3-d704-5f89-824c-4dca993d8d51", 00:18:13.486 "is_configured": true, 00:18:13.486 "data_offset": 2048, 00:18:13.486 "data_size": 63488 00:18:13.486 }, 00:18:13.486 { 00:18:13.486 "name": "BaseBdev2", 00:18:13.486 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:13.486 "is_configured": true, 00:18:13.486 "data_offset": 2048, 00:18:13.486 "data_size": 63488 00:18:13.486 } 00:18:13.486 ] 00:18:13.486 }' 00:18:13.486 06:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.486 [2024-12-06 06:44:32.114960] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:18:13.746 06:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.746 06:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.746 06:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.746 06:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:13.746 [2024-12-06 06:44:32.347479] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:18:14.315 [2024-12-06 06:44:32.684303] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:14.315 [2024-12-06 06:44:32.792510] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:14.315 [2024-12-06 06:44:32.795085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.574 98.57 IOPS, 295.71 MiB/s [2024-12-06T06:44:33.221Z] 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:14.574 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.574 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.574 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.574 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.574 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.574 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.574 06:44:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.574 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.574 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.834 "name": "raid_bdev1", 00:18:14.834 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:14.834 "strip_size_kb": 0, 00:18:14.834 "state": "online", 00:18:14.834 "raid_level": "raid1", 00:18:14.834 "superblock": true, 00:18:14.834 "num_base_bdevs": 2, 00:18:14.834 "num_base_bdevs_discovered": 2, 00:18:14.834 "num_base_bdevs_operational": 2, 00:18:14.834 "base_bdevs_list": [ 00:18:14.834 { 00:18:14.834 "name": "spare", 00:18:14.834 "uuid": "1da0bba3-d704-5f89-824c-4dca993d8d51", 00:18:14.834 "is_configured": true, 00:18:14.834 "data_offset": 2048, 00:18:14.834 "data_size": 63488 00:18:14.834 }, 00:18:14.834 { 00:18:14.834 "name": "BaseBdev2", 00:18:14.834 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:14.834 "is_configured": true, 00:18:14.834 "data_offset": 2048, 00:18:14.834 "data_size": 63488 00:18:14.834 } 00:18:14.834 ] 00:18:14.834 }' 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:18:14.834 06:44:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.834 "name": "raid_bdev1", 00:18:14.834 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:14.834 "strip_size_kb": 0, 00:18:14.834 "state": "online", 00:18:14.834 "raid_level": "raid1", 00:18:14.834 "superblock": true, 00:18:14.834 "num_base_bdevs": 2, 00:18:14.834 "num_base_bdevs_discovered": 2, 00:18:14.834 "num_base_bdevs_operational": 2, 00:18:14.834 "base_bdevs_list": [ 00:18:14.834 { 00:18:14.834 "name": "spare", 00:18:14.834 "uuid": "1da0bba3-d704-5f89-824c-4dca993d8d51", 00:18:14.834 "is_configured": true, 00:18:14.834 "data_offset": 2048, 00:18:14.834 "data_size": 63488 00:18:14.834 }, 00:18:14.834 { 00:18:14.834 "name": "BaseBdev2", 00:18:14.834 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:14.834 "is_configured": true, 00:18:14.834 
"data_offset": 2048, 00:18:14.834 "data_size": 63488 00:18:14.834 } 00:18:14.834 ] 00:18:14.834 }' 00:18:14.834 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.094 "name": "raid_bdev1", 00:18:15.094 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:15.094 "strip_size_kb": 0, 00:18:15.094 "state": "online", 00:18:15.094 "raid_level": "raid1", 00:18:15.094 "superblock": true, 00:18:15.094 "num_base_bdevs": 2, 00:18:15.094 "num_base_bdevs_discovered": 2, 00:18:15.094 "num_base_bdevs_operational": 2, 00:18:15.094 "base_bdevs_list": [ 00:18:15.094 { 00:18:15.094 "name": "spare", 00:18:15.094 "uuid": "1da0bba3-d704-5f89-824c-4dca993d8d51", 00:18:15.094 "is_configured": true, 00:18:15.094 "data_offset": 2048, 00:18:15.094 "data_size": 63488 00:18:15.094 }, 00:18:15.094 { 00:18:15.094 "name": "BaseBdev2", 00:18:15.094 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:15.094 "is_configured": true, 00:18:15.094 "data_offset": 2048, 00:18:15.094 "data_size": 63488 00:18:15.094 } 00:18:15.094 ] 00:18:15.094 }' 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.094 06:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:15.615 89.88 IOPS, 269.62 MiB/s [2024-12-06T06:44:34.262Z] 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:15.615 [2024-12-06 06:44:34.046171] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:15.615 [2024-12-06 06:44:34.046212] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.615 00:18:15.615 Latency(us) 00:18:15.615 [2024-12-06T06:44:34.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.615 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:15.615 raid_bdev1 : 8.20 88.27 264.80 0.00 0.00 16015.29 277.41 118203.11 00:18:15.615 [2024-12-06T06:44:34.262Z] =================================================================================================================== 00:18:15.615 [2024-12-06T06:44:34.262Z] Total : 88.27 264.80 0.00 0.00 16015.29 277.41 118203.11 00:18:15.615 [2024-12-06 06:44:34.162847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.615 [2024-12-06 06:44:34.162940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.615 [2024-12-06 06:44:34.163057] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.615 [2024-12-06 06:44:34.163093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:15.615 { 00:18:15.615 "results": [ 00:18:15.615 { 00:18:15.615 "job": "raid_bdev1", 00:18:15.615 "core_mask": "0x1", 00:18:15.615 "workload": "randrw", 00:18:15.615 "percentage": 50, 00:18:15.615 "status": "finished", 00:18:15.615 "queue_depth": 2, 00:18:15.615 "io_size": 3145728, 00:18:15.615 "runtime": 8.202556, 00:18:15.615 "iops": 88.26517002748899, 00:18:15.615 "mibps": 264.795510082467, 00:18:15.615 "io_failed": 0, 00:18:15.615 "io_timeout": 0, 00:18:15.615 "avg_latency_us": 16015.287995981922, 00:18:15.615 "min_latency_us": 277.4109090909091, 00:18:15.615 "max_latency_us": 118203.11272727273 00:18:15.615 } 00:18:15.615 ], 00:18:15.615 "core_count": 1 00:18:15.615 } 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:15.615 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare 
/dev/nbd0 00:18:16.185 /dev/nbd0 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:16.185 1+0 records in 00:18:16.185 1+0 records out 00:18:16.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337206 s, 12.1 MB/s 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:16.185 06:44:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:16.185 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:18:16.445 /dev/nbd1 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd1 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:16.445 1+0 records in 00:18:16.445 1+0 records out 00:18:16.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419742 s, 9.8 MB/s 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:16.445 06:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 
1048576 /dev/nbd0 /dev/nbd1 00:18:16.703 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:16.703 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:16.703 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:16.703 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:16.703 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:16.703 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:16.703 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:16.962 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:16.962 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:16.962 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:16.962 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:16.962 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:16.962 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:16.962 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:16.962 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:16.962 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:16.962 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:16.962 06:44:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:16.962 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:16.962 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:16.962 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:16.962 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:17.220 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.221 [2024-12-06 06:44:35.805227] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:17.221 [2024-12-06 06:44:35.805325] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.221 [2024-12-06 06:44:35.805358] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:17.221 [2024-12-06 06:44:35.805391] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.221 [2024-12-06 06:44:35.808555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.221 [2024-12-06 06:44:35.808620] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:17.221 [2024-12-06 06:44:35.808742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:17.221 [2024-12-06 06:44:35.808841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.221 [2024-12-06 06:44:35.809027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:17.221 spare 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.221 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.570 [2024-12-06 06:44:35.909156] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:17.570 [2024-12-06 
06:44:35.909190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:17.570 [2024-12-06 06:44:35.909582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:18:17.570 [2024-12-06 06:44:35.909792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:17.570 [2024-12-06 06:44:35.909822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:17.570 [2024-12-06 06:44:35.910057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.570 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.570 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:17.570 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.570 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.570 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.570 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.570 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.570 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.570 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.570 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.570 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.570 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.570 
06:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.570 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.570 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.570 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.570 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.570 "name": "raid_bdev1", 00:18:17.570 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:17.570 "strip_size_kb": 0, 00:18:17.570 "state": "online", 00:18:17.570 "raid_level": "raid1", 00:18:17.570 "superblock": true, 00:18:17.570 "num_base_bdevs": 2, 00:18:17.570 "num_base_bdevs_discovered": 2, 00:18:17.570 "num_base_bdevs_operational": 2, 00:18:17.570 "base_bdevs_list": [ 00:18:17.570 { 00:18:17.570 "name": "spare", 00:18:17.571 "uuid": "1da0bba3-d704-5f89-824c-4dca993d8d51", 00:18:17.571 "is_configured": true, 00:18:17.571 "data_offset": 2048, 00:18:17.571 "data_size": 63488 00:18:17.571 }, 00:18:17.571 { 00:18:17.571 "name": "BaseBdev2", 00:18:17.571 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:17.571 "is_configured": true, 00:18:17.571 "data_offset": 2048, 00:18:17.571 "data_size": 63488 00:18:17.571 } 00:18:17.571 ] 00:18:17.571 }' 00:18:17.571 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.571 06:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.845 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:17.845 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.845 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:17.845 06:44:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:17.845 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.845 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.845 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.845 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.845 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.845 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.845 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.845 "name": "raid_bdev1", 00:18:17.845 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:17.845 "strip_size_kb": 0, 00:18:17.845 "state": "online", 00:18:17.845 "raid_level": "raid1", 00:18:17.845 "superblock": true, 00:18:17.845 "num_base_bdevs": 2, 00:18:17.845 "num_base_bdevs_discovered": 2, 00:18:17.845 "num_base_bdevs_operational": 2, 00:18:17.845 "base_bdevs_list": [ 00:18:17.845 { 00:18:17.845 "name": "spare", 00:18:17.845 "uuid": "1da0bba3-d704-5f89-824c-4dca993d8d51", 00:18:17.845 "is_configured": true, 00:18:17.845 "data_offset": 2048, 00:18:17.845 "data_size": 63488 00:18:17.845 }, 00:18:17.845 { 00:18:17.845 "name": "BaseBdev2", 00:18:17.845 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:17.845 "is_configured": true, 00:18:17.845 "data_offset": 2048, 00:18:17.845 "data_size": 63488 00:18:17.845 } 00:18:17.845 ] 00:18:17.845 }' 00:18:17.846 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.104 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:18.104 06:44:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.104 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:18.104 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.104 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.104 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:18.104 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:18.104 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.104 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.104 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:18.104 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.104 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:18.104 [2024-12-06 06:44:36.638372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.104 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.104 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.104 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.105 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.105 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.105 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:18.105 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:18.105 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.105 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.105 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.105 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.105 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.105 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.105 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:18.105 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.105 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.105 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.105 "name": "raid_bdev1", 00:18:18.105 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:18.105 "strip_size_kb": 0, 00:18:18.105 "state": "online", 00:18:18.105 "raid_level": "raid1", 00:18:18.105 "superblock": true, 00:18:18.105 "num_base_bdevs": 2, 00:18:18.105 "num_base_bdevs_discovered": 1, 00:18:18.105 "num_base_bdevs_operational": 1, 00:18:18.105 "base_bdevs_list": [ 00:18:18.105 { 00:18:18.105 "name": null, 00:18:18.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.105 "is_configured": false, 00:18:18.105 "data_offset": 0, 00:18:18.105 "data_size": 63488 00:18:18.105 }, 00:18:18.105 { 00:18:18.105 "name": "BaseBdev2", 00:18:18.105 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:18.105 "is_configured": true, 00:18:18.105 
"data_offset": 2048, 00:18:18.105 "data_size": 63488 00:18:18.105 } 00:18:18.105 ] 00:18:18.105 }' 00:18:18.105 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.105 06:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:18.671 06:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:18.671 06:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.671 06:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:18.671 [2024-12-06 06:44:37.186735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:18.671 [2024-12-06 06:44:37.187025] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:18.671 [2024-12-06 06:44:37.187046] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:18.671 [2024-12-06 06:44:37.187101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:18.671 [2024-12-06 06:44:37.203503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:18:18.671 06:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.671 06:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:18.671 [2024-12-06 06:44:37.206068] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:19.607 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.607 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.607 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.607 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.607 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.607 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.607 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.607 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.607 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.607 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.866 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.866 "name": "raid_bdev1", 00:18:19.866 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:19.866 "strip_size_kb": 0, 00:18:19.866 "state": "online", 
00:18:19.866 "raid_level": "raid1", 00:18:19.866 "superblock": true, 00:18:19.866 "num_base_bdevs": 2, 00:18:19.866 "num_base_bdevs_discovered": 2, 00:18:19.866 "num_base_bdevs_operational": 2, 00:18:19.866 "process": { 00:18:19.866 "type": "rebuild", 00:18:19.866 "target": "spare", 00:18:19.866 "progress": { 00:18:19.866 "blocks": 20480, 00:18:19.866 "percent": 32 00:18:19.866 } 00:18:19.866 }, 00:18:19.866 "base_bdevs_list": [ 00:18:19.866 { 00:18:19.866 "name": "spare", 00:18:19.866 "uuid": "1da0bba3-d704-5f89-824c-4dca993d8d51", 00:18:19.866 "is_configured": true, 00:18:19.866 "data_offset": 2048, 00:18:19.866 "data_size": 63488 00:18:19.866 }, 00:18:19.866 { 00:18:19.866 "name": "BaseBdev2", 00:18:19.866 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:19.866 "is_configured": true, 00:18:19.866 "data_offset": 2048, 00:18:19.866 "data_size": 63488 00:18:19.866 } 00:18:19.866 ] 00:18:19.866 }' 00:18:19.866 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.866 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.866 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.866 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.866 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:19.866 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.867 [2024-12-06 06:44:38.383619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.867 [2024-12-06 06:44:38.415452] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:19.867 [2024-12-06 
06:44:38.415554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.867 [2024-12-06 06:44:38.415583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.867 [2024-12-06 06:44:38.415594] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.867 "name": "raid_bdev1", 00:18:19.867 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:19.867 "strip_size_kb": 0, 00:18:19.867 "state": "online", 00:18:19.867 "raid_level": "raid1", 00:18:19.867 "superblock": true, 00:18:19.867 "num_base_bdevs": 2, 00:18:19.867 "num_base_bdevs_discovered": 1, 00:18:19.867 "num_base_bdevs_operational": 1, 00:18:19.867 "base_bdevs_list": [ 00:18:19.867 { 00:18:19.867 "name": null, 00:18:19.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.867 "is_configured": false, 00:18:19.867 "data_offset": 0, 00:18:19.867 "data_size": 63488 00:18:19.867 }, 00:18:19.867 { 00:18:19.867 "name": "BaseBdev2", 00:18:19.867 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:19.867 "is_configured": true, 00:18:19.867 "data_offset": 2048, 00:18:19.867 "data_size": 63488 00:18:19.867 } 00:18:19.867 ] 00:18:19.867 }' 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.867 06:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:20.435 06:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:20.435 06:44:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.435 06:44:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:20.435 [2024-12-06 06:44:39.035984] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:20.435 [2024-12-06 06:44:39.036113] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.435 [2024-12-06 06:44:39.036154] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:18:20.435 [2024-12-06 06:44:39.036170] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.435 [2024-12-06 06:44:39.036887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.435 [2024-12-06 06:44:39.036923] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:20.435 [2024-12-06 06:44:39.037055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:20.435 [2024-12-06 06:44:39.037075] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:20.435 [2024-12-06 06:44:39.037092] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:20.435 [2024-12-06 06:44:39.037125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.435 [2024-12-06 06:44:39.053861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:18:20.435 spare 00:18:20.435 06:44:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.435 06:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:20.435 [2024-12-06 06:44:39.056444] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:21.811 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.811 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.811 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.811 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.811 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.811 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.811 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.811 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.811 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.811 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.811 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.811 "name": "raid_bdev1", 00:18:21.811 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:21.811 "strip_size_kb": 0, 00:18:21.811 "state": "online", 00:18:21.811 "raid_level": "raid1", 00:18:21.811 "superblock": true, 00:18:21.811 "num_base_bdevs": 2, 00:18:21.811 "num_base_bdevs_discovered": 2, 00:18:21.811 "num_base_bdevs_operational": 2, 00:18:21.811 "process": { 00:18:21.811 "type": "rebuild", 00:18:21.811 "target": "spare", 00:18:21.811 "progress": { 00:18:21.811 "blocks": 20480, 00:18:21.811 "percent": 32 00:18:21.811 } 00:18:21.811 }, 00:18:21.811 "base_bdevs_list": [ 00:18:21.811 { 00:18:21.811 "name": "spare", 00:18:21.811 "uuid": "1da0bba3-d704-5f89-824c-4dca993d8d51", 00:18:21.811 "is_configured": true, 00:18:21.811 "data_offset": 2048, 00:18:21.811 "data_size": 63488 00:18:21.811 }, 00:18:21.811 { 00:18:21.811 "name": "BaseBdev2", 00:18:21.811 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:21.811 "is_configured": true, 00:18:21.811 "data_offset": 2048, 00:18:21.812 "data_size": 63488 00:18:21.812 } 00:18:21.812 ] 00:18:21.812 }' 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.812 [2024-12-06 06:44:40.233965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.812 [2024-12-06 06:44:40.265882] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:21.812 [2024-12-06 06:44:40.265974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.812 [2024-12-06 06:44:40.265999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.812 [2024-12-06 06:44:40.266013] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.812 "name": "raid_bdev1", 00:18:21.812 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:21.812 "strip_size_kb": 0, 00:18:21.812 "state": "online", 00:18:21.812 "raid_level": "raid1", 00:18:21.812 "superblock": true, 00:18:21.812 "num_base_bdevs": 2, 00:18:21.812 "num_base_bdevs_discovered": 1, 00:18:21.812 "num_base_bdevs_operational": 1, 00:18:21.812 "base_bdevs_list": [ 00:18:21.812 { 00:18:21.812 "name": null, 00:18:21.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.812 "is_configured": false, 00:18:21.812 "data_offset": 0, 00:18:21.812 "data_size": 63488 00:18:21.812 }, 00:18:21.812 { 00:18:21.812 "name": "BaseBdev2", 00:18:21.812 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:21.812 "is_configured": true, 00:18:21.812 "data_offset": 2048, 00:18:21.812 "data_size": 63488 00:18:21.812 } 00:18:21.812 ] 00:18:21.812 }' 
00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.812 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:22.379 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:22.379 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.379 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:22.379 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:22.379 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.379 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.379 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.379 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.379 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:22.379 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.379 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.379 "name": "raid_bdev1", 00:18:22.379 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:22.379 "strip_size_kb": 0, 00:18:22.379 "state": "online", 00:18:22.379 "raid_level": "raid1", 00:18:22.379 "superblock": true, 00:18:22.379 "num_base_bdevs": 2, 00:18:22.379 "num_base_bdevs_discovered": 1, 00:18:22.379 "num_base_bdevs_operational": 1, 00:18:22.379 "base_bdevs_list": [ 00:18:22.379 { 00:18:22.379 "name": null, 00:18:22.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.379 "is_configured": false, 00:18:22.379 "data_offset": 0, 
00:18:22.379 "data_size": 63488 00:18:22.379 }, 00:18:22.379 { 00:18:22.379 "name": "BaseBdev2", 00:18:22.379 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:22.379 "is_configured": true, 00:18:22.379 "data_offset": 2048, 00:18:22.379 "data_size": 63488 00:18:22.379 } 00:18:22.379 ] 00:18:22.379 }' 00:18:22.379 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.379 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:22.379 06:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.379 06:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:22.379 06:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:22.379 06:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.379 06:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:22.379 06:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.379 06:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:22.379 06:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.379 06:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:22.379 [2024-12-06 06:44:41.017716] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:22.379 [2024-12-06 06:44:41.017796] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.379 [2024-12-06 06:44:41.017837] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:22.379 [2024-12-06 06:44:41.017861] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.379 [2024-12-06 06:44:41.018440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.379 [2024-12-06 06:44:41.018482] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:22.379 [2024-12-06 06:44:41.018598] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:22.379 [2024-12-06 06:44:41.018629] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:22.379 [2024-12-06 06:44:41.018641] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:22.379 [2024-12-06 06:44:41.018657] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:22.379 BaseBdev1 00:18:22.379 06:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.379 06:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.748 "name": "raid_bdev1", 00:18:23.748 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:23.748 "strip_size_kb": 0, 00:18:23.748 "state": "online", 00:18:23.748 "raid_level": "raid1", 00:18:23.748 "superblock": true, 00:18:23.748 "num_base_bdevs": 2, 00:18:23.748 "num_base_bdevs_discovered": 1, 00:18:23.748 "num_base_bdevs_operational": 1, 00:18:23.748 "base_bdevs_list": [ 00:18:23.748 { 00:18:23.748 "name": null, 00:18:23.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.748 "is_configured": false, 00:18:23.748 "data_offset": 0, 00:18:23.748 "data_size": 63488 00:18:23.748 }, 00:18:23.748 { 00:18:23.748 "name": "BaseBdev2", 00:18:23.748 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:23.748 "is_configured": true, 00:18:23.748 "data_offset": 2048, 00:18:23.748 "data_size": 63488 00:18:23.748 } 00:18:23.748 ] 00:18:23.748 }' 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.748 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:18:24.005 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:24.005 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.005 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:24.005 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:24.005 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.005 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.005 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.005 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.005 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:24.005 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.005 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.005 "name": "raid_bdev1", 00:18:24.005 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:24.005 "strip_size_kb": 0, 00:18:24.005 "state": "online", 00:18:24.005 "raid_level": "raid1", 00:18:24.005 "superblock": true, 00:18:24.005 "num_base_bdevs": 2, 00:18:24.005 "num_base_bdevs_discovered": 1, 00:18:24.005 "num_base_bdevs_operational": 1, 00:18:24.005 "base_bdevs_list": [ 00:18:24.005 { 00:18:24.005 "name": null, 00:18:24.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.005 "is_configured": false, 00:18:24.005 "data_offset": 0, 00:18:24.005 "data_size": 63488 00:18:24.005 }, 00:18:24.006 { 00:18:24.006 "name": "BaseBdev2", 00:18:24.006 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:24.006 "is_configured": true, 
00:18:24.006 "data_offset": 2048, 00:18:24.006 "data_size": 63488 00:18:24.006 } 00:18:24.006 ] 00:18:24.006 }' 00:18:24.006 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:24.264 [2024-12-06 06:44:42.734695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:24.264 [2024-12-06 06:44:42.734934] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:24.264 [2024-12-06 06:44:42.734954] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:24.264 request: 00:18:24.264 { 00:18:24.264 "base_bdev": "BaseBdev1", 00:18:24.264 "raid_bdev": "raid_bdev1", 00:18:24.264 "method": "bdev_raid_add_base_bdev", 00:18:24.264 "req_id": 1 00:18:24.264 } 00:18:24.264 Got JSON-RPC error response 00:18:24.264 response: 00:18:24.264 { 00:18:24.264 "code": -22, 00:18:24.264 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:24.264 } 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.264 06:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.203 "name": "raid_bdev1", 00:18:25.203 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:25.203 "strip_size_kb": 0, 00:18:25.203 "state": "online", 00:18:25.203 "raid_level": "raid1", 00:18:25.203 "superblock": true, 00:18:25.203 "num_base_bdevs": 2, 00:18:25.203 "num_base_bdevs_discovered": 1, 00:18:25.203 "num_base_bdevs_operational": 1, 00:18:25.203 "base_bdevs_list": [ 00:18:25.203 { 00:18:25.203 "name": null, 00:18:25.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.203 "is_configured": false, 00:18:25.203 "data_offset": 0, 00:18:25.203 "data_size": 63488 00:18:25.203 }, 00:18:25.203 { 00:18:25.203 "name": "BaseBdev2", 00:18:25.203 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:25.203 "is_configured": true, 00:18:25.203 "data_offset": 2048, 00:18:25.203 "data_size": 63488 00:18:25.203 } 00:18:25.203 ] 00:18:25.203 }' 
00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.203 06:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.809 "name": "raid_bdev1", 00:18:25.809 "uuid": "2dd1415d-aef0-4e02-9739-72e6d6c7ea10", 00:18:25.809 "strip_size_kb": 0, 00:18:25.809 "state": "online", 00:18:25.809 "raid_level": "raid1", 00:18:25.809 "superblock": true, 00:18:25.809 "num_base_bdevs": 2, 00:18:25.809 "num_base_bdevs_discovered": 1, 00:18:25.809 "num_base_bdevs_operational": 1, 00:18:25.809 "base_bdevs_list": [ 00:18:25.809 { 00:18:25.809 "name": null, 00:18:25.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.809 "is_configured": false, 00:18:25.809 "data_offset": 0, 
00:18:25.809 "data_size": 63488 00:18:25.809 }, 00:18:25.809 { 00:18:25.809 "name": "BaseBdev2", 00:18:25.809 "uuid": "56b94c9b-a122-501f-b395-7a9257ea2230", 00:18:25.809 "is_configured": true, 00:18:25.809 "data_offset": 2048, 00:18:25.809 "data_size": 63488 00:18:25.809 } 00:18:25.809 ] 00:18:25.809 }' 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77249 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77249 ']' 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77249 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.809 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77249 00:18:26.067 killing process with pid 77249 00:18:26.067 Received shutdown signal, test time was about 18.534366 seconds 00:18:26.067 00:18:26.067 Latency(us) 00:18:26.067 [2024-12-06T06:44:44.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.067 [2024-12-06T06:44:44.714Z] =================================================================================================================== 00:18:26.067 [2024-12-06T06:44:44.714Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:26.067 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:26.067 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:26.067 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77249' 00:18:26.067 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77249 00:18:26.067 [2024-12-06 06:44:44.473927] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:26.067 06:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77249 00:18:26.067 [2024-12-06 06:44:44.474127] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.067 [2024-12-06 06:44:44.474205] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.067 [2024-12-06 06:44:44.474233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:26.067 [2024-12-06 06:44:44.684889] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:27.442 00:18:27.442 real 0m21.881s 00:18:27.442 user 0m29.772s 00:18:27.442 sys 0m2.108s 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:27.442 ************************************ 00:18:27.442 END TEST raid_rebuild_test_sb_io 00:18:27.442 ************************************ 00:18:27.442 06:44:45 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:18:27.442 06:44:45 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:18:27.442 06:44:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:18:27.442 06:44:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.442 06:44:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.442 ************************************ 00:18:27.442 START TEST raid_rebuild_test 00:18:27.442 ************************************ 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77955 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77955 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77955 ']' 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.442 06:44:45 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.442 06:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.442 [2024-12-06 06:44:45.989743] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:18:27.442 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:27.442 Zero copy mechanism will not be used. 00:18:27.442 [2024-12-06 06:44:45.989925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77955 ] 00:18:27.701 [2024-12-06 06:44:46.181875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.701 [2024-12-06 06:44:46.324658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.959 [2024-12-06 06:44:46.529834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.959 [2024-12-06 06:44:46.529913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.524 06:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.524 06:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:28.524 06:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:28.524 06:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:18:28.524 06:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.524 06:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.524 BaseBdev1_malloc 00:18:28.524 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.524 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:28.524 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.524 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.524 [2024-12-06 06:44:47.037582] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:28.524 [2024-12-06 06:44:47.037652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.524 [2024-12-06 06:44:47.037682] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:28.524 [2024-12-06 06:44:47.037699] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.524 [2024-12-06 06:44:47.040491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.524 [2024-12-06 06:44:47.040554] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:28.524 BaseBdev1 00:18:28.524 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.524 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:28.524 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:28.524 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.524 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:18:28.524 BaseBdev2_malloc 00:18:28.524 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.524 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:28.524 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.524 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.524 [2024-12-06 06:44:47.085871] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:28.524 [2024-12-06 06:44:47.085942] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.524 [2024-12-06 06:44:47.085973] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:28.524 [2024-12-06 06:44:47.085991] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.524 [2024-12-06 06:44:47.088738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.525 [2024-12-06 06:44:47.088781] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:28.525 BaseBdev2 00:18:28.525 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.525 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:28.525 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:28.525 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.525 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.525 BaseBdev3_malloc 00:18:28.525 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.525 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:28.525 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.525 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.525 [2024-12-06 06:44:47.151116] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:28.525 [2024-12-06 06:44:47.151183] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.525 [2024-12-06 06:44:47.151222] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:28.525 [2024-12-06 06:44:47.151239] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.525 [2024-12-06 06:44:47.154039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.525 [2024-12-06 06:44:47.154084] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:28.525 BaseBdev3 00:18:28.525 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.525 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:28.525 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:28.525 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.525 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.783 BaseBdev4_malloc 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:28.783 [2024-12-06 06:44:47.207436] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:28.783 [2024-12-06 06:44:47.207509] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.783 [2024-12-06 06:44:47.207553] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:28.783 [2024-12-06 06:44:47.207574] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.783 [2024-12-06 06:44:47.210292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.783 [2024-12-06 06:44:47.210339] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:28.783 BaseBdev4 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.783 spare_malloc 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.783 spare_delay 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:28.783 
06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.783 [2024-12-06 06:44:47.268274] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:28.783 [2024-12-06 06:44:47.268338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.783 [2024-12-06 06:44:47.268364] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:28.783 [2024-12-06 06:44:47.268382] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.783 [2024-12-06 06:44:47.271210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.783 [2024-12-06 06:44:47.271258] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:28.783 spare 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.783 [2024-12-06 06:44:47.276307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.783 [2024-12-06 06:44:47.278752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:28.783 [2024-12-06 06:44:47.278844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:28.783 [2024-12-06 06:44:47.278925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:28.783 [2024-12-06 06:44:47.279036] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:18:28.783 [2024-12-06 06:44:47.279058] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:28.783 [2024-12-06 06:44:47.279388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:28.783 [2024-12-06 06:44:47.279638] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:28.783 [2024-12-06 06:44:47.279668] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:28.783 [2024-12-06 06:44:47.279869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.783 06:44:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.783 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.783 "name": "raid_bdev1", 00:18:28.783 "uuid": "28508cfa-3642-4d29-a541-6345bf35bed5", 00:18:28.783 "strip_size_kb": 0, 00:18:28.783 "state": "online", 00:18:28.783 "raid_level": "raid1", 00:18:28.783 "superblock": false, 00:18:28.783 "num_base_bdevs": 4, 00:18:28.783 "num_base_bdevs_discovered": 4, 00:18:28.784 "num_base_bdevs_operational": 4, 00:18:28.784 "base_bdevs_list": [ 00:18:28.784 { 00:18:28.784 "name": "BaseBdev1", 00:18:28.784 "uuid": "7c1a4c50-f8af-593a-9782-7452b0e8aee6", 00:18:28.784 "is_configured": true, 00:18:28.784 "data_offset": 0, 00:18:28.784 "data_size": 65536 00:18:28.784 }, 00:18:28.784 { 00:18:28.784 "name": "BaseBdev2", 00:18:28.784 "uuid": "b29d42da-130e-5276-9e91-12ba33a4f7eb", 00:18:28.784 "is_configured": true, 00:18:28.784 "data_offset": 0, 00:18:28.784 "data_size": 65536 00:18:28.784 }, 00:18:28.784 { 00:18:28.784 "name": "BaseBdev3", 00:18:28.784 "uuid": "370f9baf-c9cb-5deb-87cb-5d25172dc3af", 00:18:28.784 "is_configured": true, 00:18:28.784 "data_offset": 0, 00:18:28.784 "data_size": 65536 00:18:28.784 }, 00:18:28.784 { 00:18:28.784 "name": "BaseBdev4", 00:18:28.784 "uuid": "30a5b207-0e22-55bf-8df6-0d6f4f9a6ed7", 00:18:28.784 "is_configured": true, 00:18:28.784 "data_offset": 0, 00:18:28.784 "data_size": 65536 00:18:28.784 } 00:18:28.784 ] 00:18:28.784 }' 00:18:28.784 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.784 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.350 [2024-12-06 06:44:47.796952] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:29.350 06:44:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:29.610 [2024-12-06 06:44:48.180719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:29.610 /dev/nbd0 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:29.610 06:44:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:29.610 1+0 records in 00:18:29.610 1+0 records out 00:18:29.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292597 s, 14.0 MB/s 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:29.610 06:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:18:39.702 65536+0 records in 00:18:39.702 65536+0 records out 00:18:39.702 33554432 bytes (34 MB, 32 MiB) copied, 8.61977 s, 3.9 MB/s 00:18:39.702 06:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:39.702 06:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:39.702 06:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:39.702 06:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:39.702 
06:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:39.702 06:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:39.702 06:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:39.702 [2024-12-06 06:44:57.161917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.702 [2024-12-06 06:44:57.190025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.702 "name": "raid_bdev1", 00:18:39.702 "uuid": "28508cfa-3642-4d29-a541-6345bf35bed5", 00:18:39.702 "strip_size_kb": 0, 00:18:39.702 "state": "online", 00:18:39.702 "raid_level": "raid1", 00:18:39.702 "superblock": false, 00:18:39.702 "num_base_bdevs": 4, 00:18:39.702 "num_base_bdevs_discovered": 3, 00:18:39.702 "num_base_bdevs_operational": 3, 00:18:39.702 "base_bdevs_list": [ 00:18:39.702 { 00:18:39.702 "name": null, 00:18:39.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.702 
"is_configured": false, 00:18:39.702 "data_offset": 0, 00:18:39.702 "data_size": 65536 00:18:39.702 }, 00:18:39.702 { 00:18:39.702 "name": "BaseBdev2", 00:18:39.702 "uuid": "b29d42da-130e-5276-9e91-12ba33a4f7eb", 00:18:39.702 "is_configured": true, 00:18:39.702 "data_offset": 0, 00:18:39.702 "data_size": 65536 00:18:39.702 }, 00:18:39.702 { 00:18:39.702 "name": "BaseBdev3", 00:18:39.702 "uuid": "370f9baf-c9cb-5deb-87cb-5d25172dc3af", 00:18:39.702 "is_configured": true, 00:18:39.702 "data_offset": 0, 00:18:39.702 "data_size": 65536 00:18:39.702 }, 00:18:39.702 { 00:18:39.702 "name": "BaseBdev4", 00:18:39.702 "uuid": "30a5b207-0e22-55bf-8df6-0d6f4f9a6ed7", 00:18:39.702 "is_configured": true, 00:18:39.702 "data_offset": 0, 00:18:39.702 "data_size": 65536 00:18:39.702 } 00:18:39.702 ] 00:18:39.702 }' 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.702 [2024-12-06 06:44:57.670207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:39.702 [2024-12-06 06:44:57.685044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.702 06:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:39.702 [2024-12-06 06:44:57.687663] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:40.273 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.273 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.273 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.273 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.273 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.273 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.273 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.273 06:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.273 06:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.274 06:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.274 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.274 "name": "raid_bdev1", 00:18:40.274 "uuid": "28508cfa-3642-4d29-a541-6345bf35bed5", 00:18:40.274 "strip_size_kb": 0, 00:18:40.274 "state": "online", 00:18:40.274 "raid_level": "raid1", 00:18:40.274 "superblock": false, 00:18:40.274 "num_base_bdevs": 4, 00:18:40.274 "num_base_bdevs_discovered": 4, 00:18:40.274 "num_base_bdevs_operational": 4, 00:18:40.274 "process": { 00:18:40.274 "type": "rebuild", 00:18:40.274 "target": "spare", 00:18:40.274 "progress": { 00:18:40.274 "blocks": 20480, 00:18:40.274 "percent": 31 00:18:40.274 } 00:18:40.274 }, 00:18:40.274 "base_bdevs_list": [ 00:18:40.274 { 00:18:40.274 "name": "spare", 00:18:40.274 "uuid": "702a29d6-84b2-5aaf-8201-046bb3be8791", 00:18:40.274 "is_configured": true, 00:18:40.274 "data_offset": 0, 00:18:40.274 "data_size": 65536 00:18:40.274 }, 00:18:40.274 { 00:18:40.274 "name": "BaseBdev2", 00:18:40.274 "uuid": 
"b29d42da-130e-5276-9e91-12ba33a4f7eb", 00:18:40.274 "is_configured": true, 00:18:40.274 "data_offset": 0, 00:18:40.274 "data_size": 65536 00:18:40.274 }, 00:18:40.274 { 00:18:40.274 "name": "BaseBdev3", 00:18:40.274 "uuid": "370f9baf-c9cb-5deb-87cb-5d25172dc3af", 00:18:40.274 "is_configured": true, 00:18:40.274 "data_offset": 0, 00:18:40.274 "data_size": 65536 00:18:40.274 }, 00:18:40.274 { 00:18:40.274 "name": "BaseBdev4", 00:18:40.274 "uuid": "30a5b207-0e22-55bf-8df6-0d6f4f9a6ed7", 00:18:40.274 "is_configured": true, 00:18:40.274 "data_offset": 0, 00:18:40.274 "data_size": 65536 00:18:40.274 } 00:18:40.274 ] 00:18:40.274 }' 00:18:40.274 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.274 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.274 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.274 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.274 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:40.274 06:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.274 06:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.274 [2024-12-06 06:44:58.864776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:40.274 [2024-12-06 06:44:58.896773] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:40.274 [2024-12-06 06:44:58.896861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.274 [2024-12-06 06:44:58.896891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:40.274 [2024-12-06 06:44:58.896905] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:18:40.274 06:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.274 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:40.274 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.532 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.532 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.532 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.533 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:40.533 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.533 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.533 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.533 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.533 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.533 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.533 06:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.533 06:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.533 06:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.533 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.533 "name": "raid_bdev1", 00:18:40.533 "uuid": "28508cfa-3642-4d29-a541-6345bf35bed5", 00:18:40.533 "strip_size_kb": 0, 00:18:40.533 "state": "online", 
00:18:40.533 "raid_level": "raid1", 00:18:40.533 "superblock": false, 00:18:40.533 "num_base_bdevs": 4, 00:18:40.533 "num_base_bdevs_discovered": 3, 00:18:40.533 "num_base_bdevs_operational": 3, 00:18:40.533 "base_bdevs_list": [ 00:18:40.533 { 00:18:40.533 "name": null, 00:18:40.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.533 "is_configured": false, 00:18:40.533 "data_offset": 0, 00:18:40.533 "data_size": 65536 00:18:40.533 }, 00:18:40.533 { 00:18:40.533 "name": "BaseBdev2", 00:18:40.533 "uuid": "b29d42da-130e-5276-9e91-12ba33a4f7eb", 00:18:40.533 "is_configured": true, 00:18:40.533 "data_offset": 0, 00:18:40.533 "data_size": 65536 00:18:40.533 }, 00:18:40.533 { 00:18:40.533 "name": "BaseBdev3", 00:18:40.533 "uuid": "370f9baf-c9cb-5deb-87cb-5d25172dc3af", 00:18:40.533 "is_configured": true, 00:18:40.533 "data_offset": 0, 00:18:40.533 "data_size": 65536 00:18:40.533 }, 00:18:40.533 { 00:18:40.533 "name": "BaseBdev4", 00:18:40.533 "uuid": "30a5b207-0e22-55bf-8df6-0d6f4f9a6ed7", 00:18:40.533 "is_configured": true, 00:18:40.533 "data_offset": 0, 00:18:40.533 "data_size": 65536 00:18:40.533 } 00:18:40.533 ] 00:18:40.533 }' 00:18:40.533 06:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.533 06:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.099 "name": "raid_bdev1", 00:18:41.099 "uuid": "28508cfa-3642-4d29-a541-6345bf35bed5", 00:18:41.099 "strip_size_kb": 0, 00:18:41.099 "state": "online", 00:18:41.099 "raid_level": "raid1", 00:18:41.099 "superblock": false, 00:18:41.099 "num_base_bdevs": 4, 00:18:41.099 "num_base_bdevs_discovered": 3, 00:18:41.099 "num_base_bdevs_operational": 3, 00:18:41.099 "base_bdevs_list": [ 00:18:41.099 { 00:18:41.099 "name": null, 00:18:41.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.099 "is_configured": false, 00:18:41.099 "data_offset": 0, 00:18:41.099 "data_size": 65536 00:18:41.099 }, 00:18:41.099 { 00:18:41.099 "name": "BaseBdev2", 00:18:41.099 "uuid": "b29d42da-130e-5276-9e91-12ba33a4f7eb", 00:18:41.099 "is_configured": true, 00:18:41.099 "data_offset": 0, 00:18:41.099 "data_size": 65536 00:18:41.099 }, 00:18:41.099 { 00:18:41.099 "name": "BaseBdev3", 00:18:41.099 "uuid": "370f9baf-c9cb-5deb-87cb-5d25172dc3af", 00:18:41.099 "is_configured": true, 00:18:41.099 "data_offset": 0, 00:18:41.099 "data_size": 65536 00:18:41.099 }, 00:18:41.099 { 00:18:41.099 "name": "BaseBdev4", 00:18:41.099 "uuid": "30a5b207-0e22-55bf-8df6-0d6f4f9a6ed7", 00:18:41.099 "is_configured": true, 00:18:41.099 "data_offset": 0, 00:18:41.099 "data_size": 65536 00:18:41.099 } 00:18:41.099 ] 00:18:41.099 }' 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.099 [2024-12-06 06:44:59.625228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:41.099 [2024-12-06 06:44:59.639166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.099 06:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:41.099 [2024-12-06 06:44:59.641776] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:42.037 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.037 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.037 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.037 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.037 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.037 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.037 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.037 06:45:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.037 06:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.037 06:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.295 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.295 "name": "raid_bdev1", 00:18:42.295 "uuid": "28508cfa-3642-4d29-a541-6345bf35bed5", 00:18:42.295 "strip_size_kb": 0, 00:18:42.295 "state": "online", 00:18:42.295 "raid_level": "raid1", 00:18:42.295 "superblock": false, 00:18:42.296 "num_base_bdevs": 4, 00:18:42.296 "num_base_bdevs_discovered": 4, 00:18:42.296 "num_base_bdevs_operational": 4, 00:18:42.296 "process": { 00:18:42.296 "type": "rebuild", 00:18:42.296 "target": "spare", 00:18:42.296 "progress": { 00:18:42.296 "blocks": 20480, 00:18:42.296 "percent": 31 00:18:42.296 } 00:18:42.296 }, 00:18:42.296 "base_bdevs_list": [ 00:18:42.296 { 00:18:42.296 "name": "spare", 00:18:42.296 "uuid": "702a29d6-84b2-5aaf-8201-046bb3be8791", 00:18:42.296 "is_configured": true, 00:18:42.296 "data_offset": 0, 00:18:42.296 "data_size": 65536 00:18:42.296 }, 00:18:42.296 { 00:18:42.296 "name": "BaseBdev2", 00:18:42.296 "uuid": "b29d42da-130e-5276-9e91-12ba33a4f7eb", 00:18:42.296 "is_configured": true, 00:18:42.296 "data_offset": 0, 00:18:42.296 "data_size": 65536 00:18:42.296 }, 00:18:42.296 { 00:18:42.296 "name": "BaseBdev3", 00:18:42.296 "uuid": "370f9baf-c9cb-5deb-87cb-5d25172dc3af", 00:18:42.296 "is_configured": true, 00:18:42.296 "data_offset": 0, 00:18:42.296 "data_size": 65536 00:18:42.296 }, 00:18:42.296 { 00:18:42.296 "name": "BaseBdev4", 00:18:42.296 "uuid": "30a5b207-0e22-55bf-8df6-0d6f4f9a6ed7", 00:18:42.296 "is_configured": true, 00:18:42.296 "data_offset": 0, 00:18:42.296 "data_size": 65536 00:18:42.296 } 00:18:42.296 ] 00:18:42.296 }' 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.296 [2024-12-06 06:45:00.811045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:42.296 [2024-12-06 06:45:00.851032] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.296 
06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.296 "name": "raid_bdev1", 00:18:42.296 "uuid": "28508cfa-3642-4d29-a541-6345bf35bed5", 00:18:42.296 "strip_size_kb": 0, 00:18:42.296 "state": "online", 00:18:42.296 "raid_level": "raid1", 00:18:42.296 "superblock": false, 00:18:42.296 "num_base_bdevs": 4, 00:18:42.296 "num_base_bdevs_discovered": 3, 00:18:42.296 "num_base_bdevs_operational": 3, 00:18:42.296 "process": { 00:18:42.296 "type": "rebuild", 00:18:42.296 "target": "spare", 00:18:42.296 "progress": { 00:18:42.296 "blocks": 24576, 00:18:42.296 "percent": 37 00:18:42.296 } 00:18:42.296 }, 00:18:42.296 "base_bdevs_list": [ 00:18:42.296 { 00:18:42.296 "name": "spare", 00:18:42.296 "uuid": "702a29d6-84b2-5aaf-8201-046bb3be8791", 00:18:42.296 "is_configured": true, 00:18:42.296 "data_offset": 0, 00:18:42.296 "data_size": 65536 00:18:42.296 }, 00:18:42.296 { 00:18:42.296 "name": null, 00:18:42.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.296 "is_configured": false, 00:18:42.296 "data_offset": 0, 00:18:42.296 "data_size": 65536 00:18:42.296 }, 00:18:42.296 { 00:18:42.296 "name": "BaseBdev3", 00:18:42.296 "uuid": "370f9baf-c9cb-5deb-87cb-5d25172dc3af", 00:18:42.296 "is_configured": true, 
00:18:42.296 "data_offset": 0, 00:18:42.296 "data_size": 65536 00:18:42.296 }, 00:18:42.296 { 00:18:42.296 "name": "BaseBdev4", 00:18:42.296 "uuid": "30a5b207-0e22-55bf-8df6-0d6f4f9a6ed7", 00:18:42.296 "is_configured": true, 00:18:42.296 "data_offset": 0, 00:18:42.296 "data_size": 65536 00:18:42.296 } 00:18:42.296 ] 00:18:42.296 }' 00:18:42.296 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.554 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.554 06:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=481 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.554 06:45:01 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.554 "name": "raid_bdev1", 00:18:42.554 "uuid": "28508cfa-3642-4d29-a541-6345bf35bed5", 00:18:42.554 "strip_size_kb": 0, 00:18:42.554 "state": "online", 00:18:42.554 "raid_level": "raid1", 00:18:42.554 "superblock": false, 00:18:42.554 "num_base_bdevs": 4, 00:18:42.554 "num_base_bdevs_discovered": 3, 00:18:42.554 "num_base_bdevs_operational": 3, 00:18:42.554 "process": { 00:18:42.554 "type": "rebuild", 00:18:42.554 "target": "spare", 00:18:42.554 "progress": { 00:18:42.554 "blocks": 26624, 00:18:42.554 "percent": 40 00:18:42.554 } 00:18:42.554 }, 00:18:42.554 "base_bdevs_list": [ 00:18:42.554 { 00:18:42.554 "name": "spare", 00:18:42.554 "uuid": "702a29d6-84b2-5aaf-8201-046bb3be8791", 00:18:42.554 "is_configured": true, 00:18:42.554 "data_offset": 0, 00:18:42.554 "data_size": 65536 00:18:42.554 }, 00:18:42.554 { 00:18:42.554 "name": null, 00:18:42.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.554 "is_configured": false, 00:18:42.554 "data_offset": 0, 00:18:42.554 "data_size": 65536 00:18:42.554 }, 00:18:42.554 { 00:18:42.554 "name": "BaseBdev3", 00:18:42.554 "uuid": "370f9baf-c9cb-5deb-87cb-5d25172dc3af", 00:18:42.554 "is_configured": true, 00:18:42.554 "data_offset": 0, 00:18:42.554 "data_size": 65536 00:18:42.554 }, 00:18:42.554 { 00:18:42.554 "name": "BaseBdev4", 00:18:42.554 "uuid": "30a5b207-0e22-55bf-8df6-0d6f4f9a6ed7", 00:18:42.554 "is_configured": true, 00:18:42.554 "data_offset": 0, 00:18:42.554 "data_size": 65536 00:18:42.554 } 00:18:42.554 ] 00:18:42.554 }' 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.554 06:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:43.543 06:45:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:43.543 06:45:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.543 06:45:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.543 06:45:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.543 06:45:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.543 06:45:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.543 06:45:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.543 06:45:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.543 06:45:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.543 06:45:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.801 06:45:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.801 06:45:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.801 "name": "raid_bdev1", 00:18:43.801 "uuid": "28508cfa-3642-4d29-a541-6345bf35bed5", 00:18:43.801 "strip_size_kb": 0, 00:18:43.801 "state": "online", 00:18:43.801 "raid_level": "raid1", 00:18:43.801 "superblock": false, 00:18:43.801 "num_base_bdevs": 4, 00:18:43.801 "num_base_bdevs_discovered": 3, 00:18:43.801 "num_base_bdevs_operational": 3, 00:18:43.801 "process": { 00:18:43.801 "type": "rebuild", 00:18:43.801 "target": "spare", 00:18:43.801 "progress": { 00:18:43.801 
"blocks": 51200, 00:18:43.801 "percent": 78 00:18:43.801 } 00:18:43.801 }, 00:18:43.801 "base_bdevs_list": [ 00:18:43.801 { 00:18:43.801 "name": "spare", 00:18:43.801 "uuid": "702a29d6-84b2-5aaf-8201-046bb3be8791", 00:18:43.801 "is_configured": true, 00:18:43.801 "data_offset": 0, 00:18:43.801 "data_size": 65536 00:18:43.801 }, 00:18:43.801 { 00:18:43.801 "name": null, 00:18:43.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.801 "is_configured": false, 00:18:43.801 "data_offset": 0, 00:18:43.801 "data_size": 65536 00:18:43.801 }, 00:18:43.801 { 00:18:43.801 "name": "BaseBdev3", 00:18:43.801 "uuid": "370f9baf-c9cb-5deb-87cb-5d25172dc3af", 00:18:43.801 "is_configured": true, 00:18:43.801 "data_offset": 0, 00:18:43.801 "data_size": 65536 00:18:43.801 }, 00:18:43.801 { 00:18:43.801 "name": "BaseBdev4", 00:18:43.801 "uuid": "30a5b207-0e22-55bf-8df6-0d6f4f9a6ed7", 00:18:43.801 "is_configured": true, 00:18:43.801 "data_offset": 0, 00:18:43.801 "data_size": 65536 00:18:43.801 } 00:18:43.801 ] 00:18:43.801 }' 00:18:43.801 06:45:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.801 06:45:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.801 06:45:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.801 06:45:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.801 06:45:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:44.370 [2024-12-06 06:45:02.866311] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:44.370 [2024-12-06 06:45:02.866426] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:44.370 [2024-12-06 06:45:02.866501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.937 "name": "raid_bdev1", 00:18:44.937 "uuid": "28508cfa-3642-4d29-a541-6345bf35bed5", 00:18:44.937 "strip_size_kb": 0, 00:18:44.937 "state": "online", 00:18:44.937 "raid_level": "raid1", 00:18:44.937 "superblock": false, 00:18:44.937 "num_base_bdevs": 4, 00:18:44.937 "num_base_bdevs_discovered": 3, 00:18:44.937 "num_base_bdevs_operational": 3, 00:18:44.937 "base_bdevs_list": [ 00:18:44.937 { 00:18:44.937 "name": "spare", 00:18:44.937 "uuid": "702a29d6-84b2-5aaf-8201-046bb3be8791", 00:18:44.937 "is_configured": true, 00:18:44.937 "data_offset": 0, 00:18:44.937 "data_size": 65536 00:18:44.937 }, 00:18:44.937 { 00:18:44.937 "name": null, 00:18:44.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.937 "is_configured": false, 00:18:44.937 
"data_offset": 0, 00:18:44.937 "data_size": 65536 00:18:44.937 }, 00:18:44.937 { 00:18:44.937 "name": "BaseBdev3", 00:18:44.937 "uuid": "370f9baf-c9cb-5deb-87cb-5d25172dc3af", 00:18:44.937 "is_configured": true, 00:18:44.937 "data_offset": 0, 00:18:44.937 "data_size": 65536 00:18:44.937 }, 00:18:44.937 { 00:18:44.937 "name": "BaseBdev4", 00:18:44.937 "uuid": "30a5b207-0e22-55bf-8df6-0d6f4f9a6ed7", 00:18:44.937 "is_configured": true, 00:18:44.937 "data_offset": 0, 00:18:44.937 "data_size": 65536 00:18:44.937 } 00:18:44.937 ] 00:18:44.937 }' 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.937 06:45:03 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.937 "name": "raid_bdev1", 00:18:44.937 "uuid": "28508cfa-3642-4d29-a541-6345bf35bed5", 00:18:44.937 "strip_size_kb": 0, 00:18:44.937 "state": "online", 00:18:44.937 "raid_level": "raid1", 00:18:44.937 "superblock": false, 00:18:44.937 "num_base_bdevs": 4, 00:18:44.937 "num_base_bdevs_discovered": 3, 00:18:44.937 "num_base_bdevs_operational": 3, 00:18:44.937 "base_bdevs_list": [ 00:18:44.937 { 00:18:44.937 "name": "spare", 00:18:44.937 "uuid": "702a29d6-84b2-5aaf-8201-046bb3be8791", 00:18:44.937 "is_configured": true, 00:18:44.937 "data_offset": 0, 00:18:44.937 "data_size": 65536 00:18:44.937 }, 00:18:44.937 { 00:18:44.937 "name": null, 00:18:44.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.937 "is_configured": false, 00:18:44.937 "data_offset": 0, 00:18:44.937 "data_size": 65536 00:18:44.937 }, 00:18:44.937 { 00:18:44.937 "name": "BaseBdev3", 00:18:44.937 "uuid": "370f9baf-c9cb-5deb-87cb-5d25172dc3af", 00:18:44.937 "is_configured": true, 00:18:44.937 "data_offset": 0, 00:18:44.937 "data_size": 65536 00:18:44.937 }, 00:18:44.937 { 00:18:44.937 "name": "BaseBdev4", 00:18:44.937 "uuid": "30a5b207-0e22-55bf-8df6-0d6f4f9a6ed7", 00:18:44.937 "is_configured": true, 00:18:44.937 "data_offset": 0, 00:18:44.937 "data_size": 65536 00:18:44.937 } 00:18:44.937 ] 00:18:44.937 }' 00:18:44.937 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.197 "name": "raid_bdev1", 00:18:45.197 "uuid": "28508cfa-3642-4d29-a541-6345bf35bed5", 00:18:45.197 "strip_size_kb": 0, 00:18:45.197 "state": "online", 00:18:45.197 "raid_level": "raid1", 00:18:45.197 "superblock": false, 00:18:45.197 "num_base_bdevs": 4, 00:18:45.197 
"num_base_bdevs_discovered": 3, 00:18:45.197 "num_base_bdevs_operational": 3, 00:18:45.197 "base_bdevs_list": [ 00:18:45.197 { 00:18:45.197 "name": "spare", 00:18:45.197 "uuid": "702a29d6-84b2-5aaf-8201-046bb3be8791", 00:18:45.197 "is_configured": true, 00:18:45.197 "data_offset": 0, 00:18:45.197 "data_size": 65536 00:18:45.197 }, 00:18:45.197 { 00:18:45.197 "name": null, 00:18:45.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.197 "is_configured": false, 00:18:45.197 "data_offset": 0, 00:18:45.197 "data_size": 65536 00:18:45.197 }, 00:18:45.197 { 00:18:45.197 "name": "BaseBdev3", 00:18:45.197 "uuid": "370f9baf-c9cb-5deb-87cb-5d25172dc3af", 00:18:45.197 "is_configured": true, 00:18:45.197 "data_offset": 0, 00:18:45.197 "data_size": 65536 00:18:45.197 }, 00:18:45.197 { 00:18:45.197 "name": "BaseBdev4", 00:18:45.197 "uuid": "30a5b207-0e22-55bf-8df6-0d6f4f9a6ed7", 00:18:45.197 "is_configured": true, 00:18:45.197 "data_offset": 0, 00:18:45.197 "data_size": 65536 00:18:45.197 } 00:18:45.197 ] 00:18:45.197 }' 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.197 06:45:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.763 [2024-12-06 06:45:04.135021] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:45.763 [2024-12-06 06:45:04.135062] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:45.763 [2024-12-06 06:45:04.135161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.763 [2024-12-06 06:45:04.135283] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:18:45.763 [2024-12-06 06:45:04.135300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:45.763 06:45:04 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:45.763 06:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:46.022 /dev/nbd0 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.022 1+0 records in 00:18:46.022 1+0 records out 00:18:46.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033603 s, 12.2 MB/s 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:46.022 06:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:46.281 /dev/nbd1 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.281 1+0 records in 00:18:46.281 1+0 records out 00:18:46.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421484 s, 9.7 MB/s 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:46.281 06:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:46.540 06:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:46.540 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:46.540 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:46.540 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:46.541 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:46.541 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.541 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:46.800 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:46.800 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:46.800 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:46.800 06:45:05 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.800 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.800 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:46.800 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:46.800 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.800 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.800 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77955 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77955 ']' 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77955 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # 
uname 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77955 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.059 killing process with pid 77955 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77955' 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77955 00:18:47.059 Received shutdown signal, test time was about 60.000000 seconds 00:18:47.059 00:18:47.059 Latency(us) 00:18:47.059 [2024-12-06T06:45:05.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.059 [2024-12-06T06:45:05.706Z] =================================================================================================================== 00:18:47.059 [2024-12-06T06:45:05.706Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:47.059 [2024-12-06 06:45:05.687646] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:47.059 06:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77955 00:18:47.626 [2024-12-06 06:45:06.133804] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:49.004 ************************************ 00:18:49.004 END TEST raid_rebuild_test 00:18:49.004 ************************************ 00:18:49.004 06:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:49.004 00:18:49.004 real 0m21.340s 00:18:49.004 user 0m24.036s 00:18:49.004 sys 0m3.676s 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@10 -- # set +x 00:18:49.005 06:45:07 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:18:49.005 06:45:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:49.005 06:45:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.005 06:45:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.005 ************************************ 00:18:49.005 START TEST raid_rebuild_test_sb 00:18:49.005 ************************************ 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78441 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78441 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78441 ']' 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.005 06:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.005 [2024-12-06 06:45:07.418419] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:18:49.005 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:49.005 Zero copy mechanism will not be used. 
00:18:49.005 [2024-12-06 06:45:07.418616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78441 ] 00:18:49.005 [2024-12-06 06:45:07.605657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.264 [2024-12-06 06:45:07.745049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.522 [2024-12-06 06:45:07.957057] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.522 [2024-12-06 06:45:07.957117] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.820 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.820 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:49.820 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:49.820 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:49.820 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.820 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.080 BaseBdev1_malloc 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.080 [2024-12-06 06:45:08.472025] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:18:50.080 [2024-12-06 06:45:08.472111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.080 [2024-12-06 06:45:08.472142] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:50.080 [2024-12-06 06:45:08.472161] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.080 [2024-12-06 06:45:08.474952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.080 [2024-12-06 06:45:08.475163] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:50.080 BaseBdev1 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.080 BaseBdev2_malloc 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.080 [2024-12-06 06:45:08.525534] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:50.080 [2024-12-06 06:45:08.525630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.080 [2024-12-06 06:45:08.525664] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:50.080 [2024-12-06 06:45:08.525684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.080 [2024-12-06 06:45:08.528581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.080 [2024-12-06 06:45:08.528629] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:50.080 BaseBdev2 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.080 BaseBdev3_malloc 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.080 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.081 [2024-12-06 06:45:08.583248] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:50.081 [2024-12-06 06:45:08.583319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.081 [2024-12-06 06:45:08.583351] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:50.081 [2024-12-06 06:45:08.583369] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:50.081 [2024-12-06 06:45:08.586089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.081 [2024-12-06 06:45:08.586140] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:50.081 BaseBdev3 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.081 BaseBdev4_malloc 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.081 [2024-12-06 06:45:08.635674] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:50.081 [2024-12-06 06:45:08.635759] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.081 [2024-12-06 06:45:08.635790] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:50.081 [2024-12-06 06:45:08.635807] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.081 [2024-12-06 06:45:08.638604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.081 [2024-12-06 06:45:08.638656] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:50.081 BaseBdev4 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.081 spare_malloc 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.081 spare_delay 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.081 [2024-12-06 06:45:08.703783] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:50.081 [2024-12-06 06:45:08.703853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.081 [2024-12-06 06:45:08.703882] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:50.081 [2024-12-06 06:45:08.703900] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:50.081 [2024-12-06 06:45:08.706746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.081 [2024-12-06 06:45:08.706795] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:50.081 spare 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.081 [2024-12-06 06:45:08.711820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.081 [2024-12-06 06:45:08.714576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:50.081 [2024-12-06 06:45:08.714780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:50.081 [2024-12-06 06:45:08.714912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:50.081 [2024-12-06 06:45:08.715230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:50.081 [2024-12-06 06:45:08.715295] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:50.081 [2024-12-06 06:45:08.715747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:50.081 [2024-12-06 06:45:08.716125] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:50.081 [2024-12-06 06:45:08.716246] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:50.081 [2024-12-06 06:45:08.716664] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.081 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.343 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.343 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.343 "name": "raid_bdev1", 00:18:50.343 "uuid": 
"dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:18:50.343 "strip_size_kb": 0, 00:18:50.343 "state": "online", 00:18:50.343 "raid_level": "raid1", 00:18:50.343 "superblock": true, 00:18:50.343 "num_base_bdevs": 4, 00:18:50.343 "num_base_bdevs_discovered": 4, 00:18:50.343 "num_base_bdevs_operational": 4, 00:18:50.343 "base_bdevs_list": [ 00:18:50.343 { 00:18:50.343 "name": "BaseBdev1", 00:18:50.343 "uuid": "5014407f-e3c2-5f1f-8206-dc10fa46e83a", 00:18:50.343 "is_configured": true, 00:18:50.343 "data_offset": 2048, 00:18:50.343 "data_size": 63488 00:18:50.343 }, 00:18:50.343 { 00:18:50.343 "name": "BaseBdev2", 00:18:50.343 "uuid": "25231058-5899-5a32-9512-61b61798037a", 00:18:50.343 "is_configured": true, 00:18:50.343 "data_offset": 2048, 00:18:50.343 "data_size": 63488 00:18:50.343 }, 00:18:50.343 { 00:18:50.343 "name": "BaseBdev3", 00:18:50.343 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:18:50.343 "is_configured": true, 00:18:50.343 "data_offset": 2048, 00:18:50.343 "data_size": 63488 00:18:50.343 }, 00:18:50.343 { 00:18:50.343 "name": "BaseBdev4", 00:18:50.343 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:18:50.343 "is_configured": true, 00:18:50.343 "data_offset": 2048, 00:18:50.343 "data_size": 63488 00:18:50.343 } 00:18:50.343 ] 00:18:50.343 }' 00:18:50.343 06:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.343 06:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.911 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:50.911 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:50.911 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.912 [2024-12-06 06:45:09.265273] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:50.912 06:45:09 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:50.912 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:51.184 [2024-12-06 06:45:09.617041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:51.184 /dev/nbd0 00:18:51.184 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:51.184 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:51.184 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:51.184 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:51.184 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:51.184 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:51.184 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:51.184 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:51.184 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:51.184 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:51.184 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:51.184 1+0 records in 00:18:51.184 1+0 records out 00:18:51.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033442 s, 12.2 MB/s 00:18:51.184 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:51.184 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:51.184 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:51.185 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:51.185 06:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:51.185 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:51.185 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:51.185 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:51.185 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:51.185 06:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:18:59.319 63488+0 records in 00:18:59.319 63488+0 records out 00:18:59.319 32505856 bytes (33 MB, 31 MiB) copied, 8.13927 s, 4.0 MB/s 00:18:59.319 06:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:59.320 06:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:59.320 06:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:59.320 06:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:59.320 06:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:59.320 06:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:59.320 06:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:18:59.578 [2024-12-06 06:45:18.109838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.578 [2024-12-06 06:45:18.138450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.578 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.579 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.579 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.579 06:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.579 06:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.579 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.579 06:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.579 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.579 "name": "raid_bdev1", 00:18:59.579 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:18:59.579 "strip_size_kb": 0, 00:18:59.579 "state": "online", 00:18:59.579 "raid_level": "raid1", 00:18:59.579 "superblock": true, 00:18:59.579 "num_base_bdevs": 4, 00:18:59.579 "num_base_bdevs_discovered": 3, 00:18:59.579 "num_base_bdevs_operational": 3, 00:18:59.579 "base_bdevs_list": [ 00:18:59.579 { 00:18:59.579 "name": null, 00:18:59.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.579 "is_configured": false, 00:18:59.579 "data_offset": 0, 00:18:59.579 "data_size": 63488 00:18:59.579 }, 00:18:59.579 { 00:18:59.579 "name": "BaseBdev2", 00:18:59.579 "uuid": "25231058-5899-5a32-9512-61b61798037a", 00:18:59.579 "is_configured": true, 00:18:59.579 
"data_offset": 2048, 00:18:59.579 "data_size": 63488 00:18:59.579 }, 00:18:59.579 { 00:18:59.579 "name": "BaseBdev3", 00:18:59.579 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:18:59.579 "is_configured": true, 00:18:59.579 "data_offset": 2048, 00:18:59.579 "data_size": 63488 00:18:59.579 }, 00:18:59.579 { 00:18:59.579 "name": "BaseBdev4", 00:18:59.579 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:18:59.579 "is_configured": true, 00:18:59.579 "data_offset": 2048, 00:18:59.579 "data_size": 63488 00:18:59.579 } 00:18:59.579 ] 00:18:59.579 }' 00:18:59.579 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.579 06:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.146 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:00.146 06:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.146 06:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.146 [2024-12-06 06:45:18.670620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:00.146 [2024-12-06 06:45:18.685143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:19:00.146 06:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.146 06:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:00.146 [2024-12-06 06:45:18.687865] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:01.185 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:01.185 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.185 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:19:01.185 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:01.185 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.185 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.185 06:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.185 06:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.185 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.185 06:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.185 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.185 "name": "raid_bdev1", 00:19:01.185 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:01.185 "strip_size_kb": 0, 00:19:01.185 "state": "online", 00:19:01.185 "raid_level": "raid1", 00:19:01.185 "superblock": true, 00:19:01.185 "num_base_bdevs": 4, 00:19:01.186 "num_base_bdevs_discovered": 4, 00:19:01.186 "num_base_bdevs_operational": 4, 00:19:01.186 "process": { 00:19:01.186 "type": "rebuild", 00:19:01.186 "target": "spare", 00:19:01.186 "progress": { 00:19:01.186 "blocks": 20480, 00:19:01.186 "percent": 32 00:19:01.186 } 00:19:01.186 }, 00:19:01.186 "base_bdevs_list": [ 00:19:01.186 { 00:19:01.186 "name": "spare", 00:19:01.186 "uuid": "0c31bcad-4fc4-565e-bade-16bd14bb1df5", 00:19:01.186 "is_configured": true, 00:19:01.186 "data_offset": 2048, 00:19:01.186 "data_size": 63488 00:19:01.186 }, 00:19:01.186 { 00:19:01.186 "name": "BaseBdev2", 00:19:01.186 "uuid": "25231058-5899-5a32-9512-61b61798037a", 00:19:01.186 "is_configured": true, 00:19:01.186 "data_offset": 2048, 00:19:01.186 "data_size": 63488 00:19:01.186 }, 00:19:01.186 { 00:19:01.186 "name": "BaseBdev3", 00:19:01.186 "uuid": 
"f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:01.186 "is_configured": true, 00:19:01.186 "data_offset": 2048, 00:19:01.186 "data_size": 63488 00:19:01.186 }, 00:19:01.186 { 00:19:01.186 "name": "BaseBdev4", 00:19:01.186 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:01.186 "is_configured": true, 00:19:01.186 "data_offset": 2048, 00:19:01.186 "data_size": 63488 00:19:01.186 } 00:19:01.186 ] 00:19:01.186 }' 00:19:01.186 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.186 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:01.186 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.445 [2024-12-06 06:45:19.869082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.445 [2024-12-06 06:45:19.897001] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:01.445 [2024-12-06 06:45:19.897112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.445 [2024-12-06 06:45:19.897140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.445 [2024-12-06 06:45:19.897156] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.445 "name": "raid_bdev1", 00:19:01.445 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:01.445 "strip_size_kb": 0, 00:19:01.445 "state": "online", 00:19:01.445 "raid_level": "raid1", 00:19:01.445 "superblock": true, 00:19:01.445 "num_base_bdevs": 4, 00:19:01.445 
"num_base_bdevs_discovered": 3, 00:19:01.445 "num_base_bdevs_operational": 3, 00:19:01.445 "base_bdevs_list": [ 00:19:01.445 { 00:19:01.445 "name": null, 00:19:01.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.445 "is_configured": false, 00:19:01.445 "data_offset": 0, 00:19:01.445 "data_size": 63488 00:19:01.445 }, 00:19:01.445 { 00:19:01.445 "name": "BaseBdev2", 00:19:01.445 "uuid": "25231058-5899-5a32-9512-61b61798037a", 00:19:01.445 "is_configured": true, 00:19:01.445 "data_offset": 2048, 00:19:01.445 "data_size": 63488 00:19:01.445 }, 00:19:01.445 { 00:19:01.445 "name": "BaseBdev3", 00:19:01.445 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:01.445 "is_configured": true, 00:19:01.445 "data_offset": 2048, 00:19:01.445 "data_size": 63488 00:19:01.445 }, 00:19:01.445 { 00:19:01.445 "name": "BaseBdev4", 00:19:01.445 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:01.445 "is_configured": true, 00:19:01.445 "data_offset": 2048, 00:19:01.445 "data_size": 63488 00:19:01.445 } 00:19:01.445 ] 00:19:01.445 }' 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.445 06:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.013 "name": "raid_bdev1", 00:19:02.013 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:02.013 "strip_size_kb": 0, 00:19:02.013 "state": "online", 00:19:02.013 "raid_level": "raid1", 00:19:02.013 "superblock": true, 00:19:02.013 "num_base_bdevs": 4, 00:19:02.013 "num_base_bdevs_discovered": 3, 00:19:02.013 "num_base_bdevs_operational": 3, 00:19:02.013 "base_bdevs_list": [ 00:19:02.013 { 00:19:02.013 "name": null, 00:19:02.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.013 "is_configured": false, 00:19:02.013 "data_offset": 0, 00:19:02.013 "data_size": 63488 00:19:02.013 }, 00:19:02.013 { 00:19:02.013 "name": "BaseBdev2", 00:19:02.013 "uuid": "25231058-5899-5a32-9512-61b61798037a", 00:19:02.013 "is_configured": true, 00:19:02.013 "data_offset": 2048, 00:19:02.013 "data_size": 63488 00:19:02.013 }, 00:19:02.013 { 00:19:02.013 "name": "BaseBdev3", 00:19:02.013 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:02.013 "is_configured": true, 00:19:02.013 "data_offset": 2048, 00:19:02.013 "data_size": 63488 00:19:02.013 }, 00:19:02.013 { 00:19:02.013 "name": "BaseBdev4", 00:19:02.013 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:02.013 "is_configured": true, 00:19:02.013 "data_offset": 2048, 00:19:02.013 "data_size": 63488 00:19:02.013 } 00:19:02.013 ] 00:19:02.013 }' 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.013 [2024-12-06 06:45:20.600997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:02.013 [2024-12-06 06:45:20.614734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.013 06:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:02.013 [2024-12-06 06:45:20.617359] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.392 "name": "raid_bdev1", 00:19:03.392 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:03.392 "strip_size_kb": 0, 00:19:03.392 "state": "online", 00:19:03.392 "raid_level": "raid1", 00:19:03.392 "superblock": true, 00:19:03.392 "num_base_bdevs": 4, 00:19:03.392 "num_base_bdevs_discovered": 4, 00:19:03.392 "num_base_bdevs_operational": 4, 00:19:03.392 "process": { 00:19:03.392 "type": "rebuild", 00:19:03.392 "target": "spare", 00:19:03.392 "progress": { 00:19:03.392 "blocks": 20480, 00:19:03.392 "percent": 32 00:19:03.392 } 00:19:03.392 }, 00:19:03.392 "base_bdevs_list": [ 00:19:03.392 { 00:19:03.392 "name": "spare", 00:19:03.392 "uuid": "0c31bcad-4fc4-565e-bade-16bd14bb1df5", 00:19:03.392 "is_configured": true, 00:19:03.392 "data_offset": 2048, 00:19:03.392 "data_size": 63488 00:19:03.392 }, 00:19:03.392 { 00:19:03.392 "name": "BaseBdev2", 00:19:03.392 "uuid": "25231058-5899-5a32-9512-61b61798037a", 00:19:03.392 "is_configured": true, 00:19:03.392 "data_offset": 2048, 00:19:03.392 "data_size": 63488 00:19:03.392 }, 00:19:03.392 { 00:19:03.392 "name": "BaseBdev3", 00:19:03.392 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:03.392 "is_configured": true, 00:19:03.392 "data_offset": 2048, 00:19:03.392 "data_size": 63488 00:19:03.392 }, 00:19:03.392 { 00:19:03.392 "name": "BaseBdev4", 00:19:03.392 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:03.392 "is_configured": true, 00:19:03.392 "data_offset": 2048, 00:19:03.392 "data_size": 63488 00:19:03.392 } 00:19:03.392 ] 00:19:03.392 }' 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:03.392 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.392 [2024-12-06 06:45:21.774627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:03.392 [2024-12-06 06:45:21.926771] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.392 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.392 "name": "raid_bdev1", 00:19:03.392 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:03.392 "strip_size_kb": 0, 00:19:03.392 "state": "online", 00:19:03.392 "raid_level": "raid1", 00:19:03.392 "superblock": true, 00:19:03.392 "num_base_bdevs": 4, 00:19:03.392 "num_base_bdevs_discovered": 3, 00:19:03.392 "num_base_bdevs_operational": 3, 00:19:03.392 "process": { 00:19:03.392 "type": "rebuild", 00:19:03.392 "target": "spare", 00:19:03.392 "progress": { 00:19:03.392 "blocks": 24576, 00:19:03.392 "percent": 38 00:19:03.392 } 00:19:03.392 }, 00:19:03.392 "base_bdevs_list": [ 00:19:03.393 { 00:19:03.393 "name": "spare", 00:19:03.393 "uuid": "0c31bcad-4fc4-565e-bade-16bd14bb1df5", 00:19:03.393 "is_configured": true, 00:19:03.393 "data_offset": 2048, 00:19:03.393 "data_size": 63488 00:19:03.393 }, 00:19:03.393 { 00:19:03.393 "name": null, 00:19:03.393 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:03.393 "is_configured": false, 00:19:03.393 "data_offset": 0, 00:19:03.393 "data_size": 63488 00:19:03.393 }, 00:19:03.393 { 00:19:03.393 "name": "BaseBdev3", 00:19:03.393 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:03.393 "is_configured": true, 00:19:03.393 "data_offset": 2048, 00:19:03.393 "data_size": 63488 00:19:03.393 }, 00:19:03.393 { 00:19:03.393 "name": "BaseBdev4", 00:19:03.393 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:03.393 "is_configured": true, 00:19:03.393 "data_offset": 2048, 00:19:03.393 "data_size": 63488 00:19:03.393 } 00:19:03.393 ] 00:19:03.393 }' 00:19:03.393 06:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.393 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:03.393 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=502 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.652 
06:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.652 "name": "raid_bdev1", 00:19:03.652 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:03.652 "strip_size_kb": 0, 00:19:03.652 "state": "online", 00:19:03.652 "raid_level": "raid1", 00:19:03.652 "superblock": true, 00:19:03.652 "num_base_bdevs": 4, 00:19:03.652 "num_base_bdevs_discovered": 3, 00:19:03.652 "num_base_bdevs_operational": 3, 00:19:03.652 "process": { 00:19:03.652 "type": "rebuild", 00:19:03.652 "target": "spare", 00:19:03.652 "progress": { 00:19:03.652 "blocks": 26624, 00:19:03.652 "percent": 41 00:19:03.652 } 00:19:03.652 }, 00:19:03.652 "base_bdevs_list": [ 00:19:03.652 { 00:19:03.652 "name": "spare", 00:19:03.652 "uuid": "0c31bcad-4fc4-565e-bade-16bd14bb1df5", 00:19:03.652 "is_configured": true, 00:19:03.652 "data_offset": 2048, 00:19:03.652 "data_size": 63488 00:19:03.652 }, 00:19:03.652 { 00:19:03.652 "name": null, 00:19:03.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.652 "is_configured": false, 00:19:03.652 "data_offset": 0, 00:19:03.652 "data_size": 63488 00:19:03.652 }, 00:19:03.652 { 00:19:03.652 "name": "BaseBdev3", 00:19:03.652 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:03.652 "is_configured": true, 00:19:03.652 "data_offset": 2048, 00:19:03.652 "data_size": 63488 00:19:03.652 }, 00:19:03.652 { 00:19:03.652 "name": "BaseBdev4", 00:19:03.652 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:03.652 "is_configured": true, 00:19:03.652 "data_offset": 2048, 00:19:03.652 "data_size": 63488 
00:19:03.652 } 00:19:03.652 ] 00:19:03.652 }' 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:03.652 06:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:04.662 06:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:04.662 06:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:04.662 06:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.662 06:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:04.662 06:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:04.662 06:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.662 06:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.662 06:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.662 06:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.662 06:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.662 06:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.662 06:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.662 "name": "raid_bdev1", 00:19:04.662 "uuid": 
"dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:04.662 "strip_size_kb": 0, 00:19:04.662 "state": "online", 00:19:04.662 "raid_level": "raid1", 00:19:04.662 "superblock": true, 00:19:04.662 "num_base_bdevs": 4, 00:19:04.662 "num_base_bdevs_discovered": 3, 00:19:04.662 "num_base_bdevs_operational": 3, 00:19:04.662 "process": { 00:19:04.662 "type": "rebuild", 00:19:04.662 "target": "spare", 00:19:04.662 "progress": { 00:19:04.662 "blocks": 51200, 00:19:04.662 "percent": 80 00:19:04.662 } 00:19:04.662 }, 00:19:04.662 "base_bdevs_list": [ 00:19:04.662 { 00:19:04.662 "name": "spare", 00:19:04.662 "uuid": "0c31bcad-4fc4-565e-bade-16bd14bb1df5", 00:19:04.662 "is_configured": true, 00:19:04.662 "data_offset": 2048, 00:19:04.662 "data_size": 63488 00:19:04.662 }, 00:19:04.662 { 00:19:04.662 "name": null, 00:19:04.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.662 "is_configured": false, 00:19:04.662 "data_offset": 0, 00:19:04.662 "data_size": 63488 00:19:04.662 }, 00:19:04.662 { 00:19:04.662 "name": "BaseBdev3", 00:19:04.662 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:04.662 "is_configured": true, 00:19:04.662 "data_offset": 2048, 00:19:04.662 "data_size": 63488 00:19:04.662 }, 00:19:04.662 { 00:19:04.662 "name": "BaseBdev4", 00:19:04.662 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:04.662 "is_configured": true, 00:19:04.662 "data_offset": 2048, 00:19:04.662 "data_size": 63488 00:19:04.662 } 00:19:04.662 ] 00:19:04.662 }' 00:19:04.922 06:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.922 06:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:04.922 06:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.922 06:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:04.922 06:45:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:19:05.489 [2024-12-06 06:45:23.840946] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:05.489 [2024-12-06 06:45:23.841044] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:05.489 [2024-12-06 06:45:23.841227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.057 "name": "raid_bdev1", 00:19:06.057 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:06.057 "strip_size_kb": 0, 00:19:06.057 "state": "online", 00:19:06.057 "raid_level": "raid1", 00:19:06.057 "superblock": true, 00:19:06.057 "num_base_bdevs": 
4, 00:19:06.057 "num_base_bdevs_discovered": 3, 00:19:06.057 "num_base_bdevs_operational": 3, 00:19:06.057 "base_bdevs_list": [ 00:19:06.057 { 00:19:06.057 "name": "spare", 00:19:06.057 "uuid": "0c31bcad-4fc4-565e-bade-16bd14bb1df5", 00:19:06.057 "is_configured": true, 00:19:06.057 "data_offset": 2048, 00:19:06.057 "data_size": 63488 00:19:06.057 }, 00:19:06.057 { 00:19:06.057 "name": null, 00:19:06.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.057 "is_configured": false, 00:19:06.057 "data_offset": 0, 00:19:06.057 "data_size": 63488 00:19:06.057 }, 00:19:06.057 { 00:19:06.057 "name": "BaseBdev3", 00:19:06.057 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:06.057 "is_configured": true, 00:19:06.057 "data_offset": 2048, 00:19:06.057 "data_size": 63488 00:19:06.057 }, 00:19:06.057 { 00:19:06.057 "name": "BaseBdev4", 00:19:06.057 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:06.057 "is_configured": true, 00:19:06.057 "data_offset": 2048, 00:19:06.057 "data_size": 63488 00:19:06.057 } 00:19:06.057 ] 00:19:06.057 }' 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:06.057 06:45:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.057 "name": "raid_bdev1", 00:19:06.057 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:06.057 "strip_size_kb": 0, 00:19:06.057 "state": "online", 00:19:06.057 "raid_level": "raid1", 00:19:06.057 "superblock": true, 00:19:06.057 "num_base_bdevs": 4, 00:19:06.057 "num_base_bdevs_discovered": 3, 00:19:06.057 "num_base_bdevs_operational": 3, 00:19:06.057 "base_bdevs_list": [ 00:19:06.057 { 00:19:06.057 "name": "spare", 00:19:06.057 "uuid": "0c31bcad-4fc4-565e-bade-16bd14bb1df5", 00:19:06.057 "is_configured": true, 00:19:06.057 "data_offset": 2048, 00:19:06.057 "data_size": 63488 00:19:06.057 }, 00:19:06.057 { 00:19:06.057 "name": null, 00:19:06.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.057 "is_configured": false, 00:19:06.057 "data_offset": 0, 00:19:06.057 "data_size": 63488 00:19:06.057 }, 00:19:06.057 { 00:19:06.057 "name": "BaseBdev3", 00:19:06.057 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:06.057 "is_configured": true, 00:19:06.057 "data_offset": 2048, 00:19:06.057 "data_size": 63488 00:19:06.057 }, 00:19:06.057 { 00:19:06.057 "name": "BaseBdev4", 00:19:06.057 "uuid": 
"40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:06.057 "is_configured": true, 00:19:06.057 "data_offset": 2048, 00:19:06.057 "data_size": 63488 00:19:06.057 } 00:19:06.057 ] 00:19:06.057 }' 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:06.057 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.317 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:06.317 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:06.317 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.317 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.318 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.318 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.318 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:06.318 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.318 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.318 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.318 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.318 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.318 06:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.318 06:45:24 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.318 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.318 06:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.318 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.318 "name": "raid_bdev1", 00:19:06.318 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:06.318 "strip_size_kb": 0, 00:19:06.318 "state": "online", 00:19:06.318 "raid_level": "raid1", 00:19:06.318 "superblock": true, 00:19:06.318 "num_base_bdevs": 4, 00:19:06.318 "num_base_bdevs_discovered": 3, 00:19:06.318 "num_base_bdevs_operational": 3, 00:19:06.318 "base_bdevs_list": [ 00:19:06.318 { 00:19:06.318 "name": "spare", 00:19:06.318 "uuid": "0c31bcad-4fc4-565e-bade-16bd14bb1df5", 00:19:06.318 "is_configured": true, 00:19:06.318 "data_offset": 2048, 00:19:06.318 "data_size": 63488 00:19:06.318 }, 00:19:06.318 { 00:19:06.318 "name": null, 00:19:06.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.318 "is_configured": false, 00:19:06.318 "data_offset": 0, 00:19:06.318 "data_size": 63488 00:19:06.318 }, 00:19:06.318 { 00:19:06.318 "name": "BaseBdev3", 00:19:06.318 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:06.318 "is_configured": true, 00:19:06.318 "data_offset": 2048, 00:19:06.318 "data_size": 63488 00:19:06.318 }, 00:19:06.318 { 00:19:06.318 "name": "BaseBdev4", 00:19:06.318 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:06.318 "is_configured": true, 00:19:06.318 "data_offset": 2048, 00:19:06.318 "data_size": 63488 00:19:06.318 } 00:19:06.318 ] 00:19:06.318 }' 00:19:06.318 06:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.318 06:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 
-- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.890 [2024-12-06 06:45:25.249184] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:06.890 [2024-12-06 06:45:25.249223] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:06.890 [2024-12-06 06:45:25.249325] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.890 [2024-12-06 06:45:25.249459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.890 [2024-12-06 06:45:25.249477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:06.890 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:06.891 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:06.891 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:06.891 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:07.149 /dev/nbd0 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:07.149 
06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:07.149 1+0 records in 00:19:07.149 1+0 records out 00:19:07.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178778 s, 22.9 MB/s 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:07.149 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:07.406 /dev/nbd1 00:19:07.406 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:07.406 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:07.406 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:07.406 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:07.406 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:07.406 06:45:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:07.406 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:07.406 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:07.406 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:07.406 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:07.407 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:07.407 1+0 records in 00:19:07.407 1+0 records out 00:19:07.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512593 s, 8.0 MB/s 00:19:07.407 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.407 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:07.407 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.407 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:07.407 06:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:07.407 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:07.407 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:07.407 06:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:07.733 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:07.733 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:07.733 06:45:26 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:07.733 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:07.733 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:07.733 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:07.733 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:07.991 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:07.992 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:07.992 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:07.992 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:07.992 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:07.992 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:07.992 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:07.992 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:07.992 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:07.992 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:08.250 06:45:26 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.250 [2024-12-06 06:45:26.790124] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:08.250 [2024-12-06 06:45:26.790185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.250 [2024-12-06 06:45:26.790218] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:08.250 [2024-12-06 06:45:26.790234] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.250 [2024-12-06 06:45:26.793138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.250 [2024-12-06 06:45:26.793199] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:08.250 [2024-12-06 06:45:26.793326] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:08.250 [2024-12-06 06:45:26.793391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:08.250 [2024-12-06 06:45:26.793585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:08.250 [2024-12-06 06:45:26.793732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:08.250 spare 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.250 06:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.250 [2024-12-06 06:45:26.893868] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:08.250 [2024-12-06 06:45:26.893912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:08.250 [2024-12-06 06:45:26.894334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:19:08.250 [2024-12-06 06:45:26.894628] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:08.250 [2024-12-06 06:45:26.894656] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:08.250 [2024-12-06 06:45:26.894884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.509 06:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.509 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:19:08.509 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.509 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.509 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.509 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.509 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:08.509 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.509 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.509 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.509 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.509 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.509 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.509 06:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.509 06:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.509 06:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.509 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.509 "name": "raid_bdev1", 00:19:08.509 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:08.509 "strip_size_kb": 0, 00:19:08.509 "state": "online", 00:19:08.509 "raid_level": "raid1", 00:19:08.509 "superblock": true, 00:19:08.509 "num_base_bdevs": 4, 00:19:08.509 "num_base_bdevs_discovered": 3, 00:19:08.509 "num_base_bdevs_operational": 
3, 00:19:08.509 "base_bdevs_list": [ 00:19:08.509 { 00:19:08.509 "name": "spare", 00:19:08.509 "uuid": "0c31bcad-4fc4-565e-bade-16bd14bb1df5", 00:19:08.509 "is_configured": true, 00:19:08.509 "data_offset": 2048, 00:19:08.509 "data_size": 63488 00:19:08.509 }, 00:19:08.510 { 00:19:08.510 "name": null, 00:19:08.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.510 "is_configured": false, 00:19:08.510 "data_offset": 2048, 00:19:08.510 "data_size": 63488 00:19:08.510 }, 00:19:08.510 { 00:19:08.510 "name": "BaseBdev3", 00:19:08.510 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:08.510 "is_configured": true, 00:19:08.510 "data_offset": 2048, 00:19:08.510 "data_size": 63488 00:19:08.510 }, 00:19:08.510 { 00:19:08.510 "name": "BaseBdev4", 00:19:08.510 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:08.510 "is_configured": true, 00:19:08.510 "data_offset": 2048, 00:19:08.510 "data_size": 63488 00:19:08.510 } 00:19:08.510 ] 00:19:08.510 }' 00:19:08.510 06:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.510 06:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.078 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:09.078 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.078 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:09.078 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:09.078 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.078 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.078 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.078 06:45:27 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.078 06:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.078 06:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.078 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.078 "name": "raid_bdev1", 00:19:09.078 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:09.078 "strip_size_kb": 0, 00:19:09.078 "state": "online", 00:19:09.078 "raid_level": "raid1", 00:19:09.078 "superblock": true, 00:19:09.078 "num_base_bdevs": 4, 00:19:09.078 "num_base_bdevs_discovered": 3, 00:19:09.078 "num_base_bdevs_operational": 3, 00:19:09.078 "base_bdevs_list": [ 00:19:09.078 { 00:19:09.078 "name": "spare", 00:19:09.078 "uuid": "0c31bcad-4fc4-565e-bade-16bd14bb1df5", 00:19:09.078 "is_configured": true, 00:19:09.078 "data_offset": 2048, 00:19:09.078 "data_size": 63488 00:19:09.078 }, 00:19:09.078 { 00:19:09.078 "name": null, 00:19:09.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.078 "is_configured": false, 00:19:09.078 "data_offset": 2048, 00:19:09.078 "data_size": 63488 00:19:09.078 }, 00:19:09.078 { 00:19:09.078 "name": "BaseBdev3", 00:19:09.078 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:09.078 "is_configured": true, 00:19:09.078 "data_offset": 2048, 00:19:09.078 "data_size": 63488 00:19:09.078 }, 00:19:09.078 { 00:19:09.078 "name": "BaseBdev4", 00:19:09.078 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:09.078 "is_configured": true, 00:19:09.078 "data_offset": 2048, 00:19:09.078 "data_size": 63488 00:19:09.078 } 00:19:09.079 ] 00:19:09.079 }' 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.079 [2024-12-06 06:45:27.643124] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.079 "name": "raid_bdev1", 00:19:09.079 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:09.079 "strip_size_kb": 0, 00:19:09.079 "state": "online", 00:19:09.079 "raid_level": "raid1", 00:19:09.079 "superblock": true, 00:19:09.079 "num_base_bdevs": 4, 00:19:09.079 "num_base_bdevs_discovered": 2, 00:19:09.079 "num_base_bdevs_operational": 2, 00:19:09.079 "base_bdevs_list": [ 00:19:09.079 { 00:19:09.079 "name": null, 00:19:09.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.079 "is_configured": false, 00:19:09.079 "data_offset": 0, 00:19:09.079 "data_size": 63488 00:19:09.079 }, 00:19:09.079 { 00:19:09.079 "name": null, 00:19:09.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.079 "is_configured": false, 00:19:09.079 "data_offset": 2048, 00:19:09.079 "data_size": 63488 00:19:09.079 }, 00:19:09.079 { 00:19:09.079 "name": "BaseBdev3", 00:19:09.079 
"uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:09.079 "is_configured": true, 00:19:09.079 "data_offset": 2048, 00:19:09.079 "data_size": 63488 00:19:09.079 }, 00:19:09.079 { 00:19:09.079 "name": "BaseBdev4", 00:19:09.079 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:09.079 "is_configured": true, 00:19:09.079 "data_offset": 2048, 00:19:09.079 "data_size": 63488 00:19:09.079 } 00:19:09.079 ] 00:19:09.079 }' 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.079 06:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.646 06:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:09.646 06:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.646 06:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.646 [2024-12-06 06:45:28.163280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:09.646 [2024-12-06 06:45:28.163546] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:09.646 [2024-12-06 06:45:28.163570] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:09.646 [2024-12-06 06:45:28.163628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:09.646 [2024-12-06 06:45:28.177182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:19:09.646 06:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.646 06:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:09.646 [2024-12-06 06:45:28.179693] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:10.583 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:10.583 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.583 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:10.583 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:10.583 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.583 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.583 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.583 06:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.583 06:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.583 06:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.842 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.842 "name": "raid_bdev1", 00:19:10.842 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:10.842 "strip_size_kb": 0, 00:19:10.842 "state": "online", 00:19:10.842 "raid_level": "raid1", 
00:19:10.842 "superblock": true, 00:19:10.842 "num_base_bdevs": 4, 00:19:10.842 "num_base_bdevs_discovered": 3, 00:19:10.842 "num_base_bdevs_operational": 3, 00:19:10.842 "process": { 00:19:10.842 "type": "rebuild", 00:19:10.842 "target": "spare", 00:19:10.842 "progress": { 00:19:10.842 "blocks": 20480, 00:19:10.842 "percent": 32 00:19:10.842 } 00:19:10.842 }, 00:19:10.842 "base_bdevs_list": [ 00:19:10.842 { 00:19:10.842 "name": "spare", 00:19:10.842 "uuid": "0c31bcad-4fc4-565e-bade-16bd14bb1df5", 00:19:10.842 "is_configured": true, 00:19:10.842 "data_offset": 2048, 00:19:10.842 "data_size": 63488 00:19:10.842 }, 00:19:10.842 { 00:19:10.842 "name": null, 00:19:10.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.842 "is_configured": false, 00:19:10.842 "data_offset": 2048, 00:19:10.842 "data_size": 63488 00:19:10.842 }, 00:19:10.842 { 00:19:10.842 "name": "BaseBdev3", 00:19:10.842 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:10.842 "is_configured": true, 00:19:10.842 "data_offset": 2048, 00:19:10.842 "data_size": 63488 00:19:10.842 }, 00:19:10.842 { 00:19:10.842 "name": "BaseBdev4", 00:19:10.842 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:10.842 "is_configured": true, 00:19:10.842 "data_offset": 2048, 00:19:10.842 "data_size": 63488 00:19:10.842 } 00:19:10.842 ] 00:19:10.842 }' 00:19:10.842 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.842 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.843 [2024-12-06 06:45:29.336861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:10.843 [2024-12-06 06:45:29.388811] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:10.843 [2024-12-06 06:45:29.388902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:10.843 [2024-12-06 06:45:29.388932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:10.843 [2024-12-06 06:45:29.388944] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.843 "name": "raid_bdev1", 00:19:10.843 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:10.843 "strip_size_kb": 0, 00:19:10.843 "state": "online", 00:19:10.843 "raid_level": "raid1", 00:19:10.843 "superblock": true, 00:19:10.843 "num_base_bdevs": 4, 00:19:10.843 "num_base_bdevs_discovered": 2, 00:19:10.843 "num_base_bdevs_operational": 2, 00:19:10.843 "base_bdevs_list": [ 00:19:10.843 { 00:19:10.843 "name": null, 00:19:10.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.843 "is_configured": false, 00:19:10.843 "data_offset": 0, 00:19:10.843 "data_size": 63488 00:19:10.843 }, 00:19:10.843 { 00:19:10.843 "name": null, 00:19:10.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.843 "is_configured": false, 00:19:10.843 "data_offset": 2048, 00:19:10.843 "data_size": 63488 00:19:10.843 }, 00:19:10.843 { 00:19:10.843 "name": "BaseBdev3", 00:19:10.843 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:10.843 "is_configured": true, 00:19:10.843 "data_offset": 2048, 00:19:10.843 "data_size": 63488 00:19:10.843 }, 00:19:10.843 { 00:19:10.843 "name": "BaseBdev4", 00:19:10.843 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:10.843 "is_configured": true, 00:19:10.843 "data_offset": 2048, 00:19:10.843 "data_size": 63488 00:19:10.843 } 00:19:10.843 ] 00:19:10.843 }' 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:19:10.843 06:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.434 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:11.434 06:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.434 06:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.434 [2024-12-06 06:45:29.921262] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:11.434 [2024-12-06 06:45:29.921344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.434 [2024-12-06 06:45:29.921395] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:11.434 [2024-12-06 06:45:29.921411] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.434 [2024-12-06 06:45:29.922036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.434 [2024-12-06 06:45:29.922076] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:11.434 [2024-12-06 06:45:29.922200] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:11.434 [2024-12-06 06:45:29.922219] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:11.434 [2024-12-06 06:45:29.922239] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:11.434 [2024-12-06 06:45:29.922281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:11.434 [2024-12-06 06:45:29.935866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:19:11.434 spare 00:19:11.434 06:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.434 06:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:11.434 [2024-12-06 06:45:29.938389] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:12.384 06:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.384 06:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.384 06:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.384 06:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.384 06:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.384 06:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.384 06:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.384 06:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.384 06:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.384 06:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.384 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.384 "name": "raid_bdev1", 00:19:12.384 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:12.384 "strip_size_kb": 0, 00:19:12.384 "state": "online", 00:19:12.384 
"raid_level": "raid1", 00:19:12.384 "superblock": true, 00:19:12.384 "num_base_bdevs": 4, 00:19:12.384 "num_base_bdevs_discovered": 3, 00:19:12.384 "num_base_bdevs_operational": 3, 00:19:12.384 "process": { 00:19:12.384 "type": "rebuild", 00:19:12.384 "target": "spare", 00:19:12.384 "progress": { 00:19:12.384 "blocks": 20480, 00:19:12.384 "percent": 32 00:19:12.384 } 00:19:12.384 }, 00:19:12.384 "base_bdevs_list": [ 00:19:12.384 { 00:19:12.384 "name": "spare", 00:19:12.384 "uuid": "0c31bcad-4fc4-565e-bade-16bd14bb1df5", 00:19:12.384 "is_configured": true, 00:19:12.384 "data_offset": 2048, 00:19:12.384 "data_size": 63488 00:19:12.384 }, 00:19:12.384 { 00:19:12.384 "name": null, 00:19:12.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.384 "is_configured": false, 00:19:12.384 "data_offset": 2048, 00:19:12.384 "data_size": 63488 00:19:12.384 }, 00:19:12.384 { 00:19:12.384 "name": "BaseBdev3", 00:19:12.384 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:12.384 "is_configured": true, 00:19:12.384 "data_offset": 2048, 00:19:12.384 "data_size": 63488 00:19:12.384 }, 00:19:12.384 { 00:19:12.384 "name": "BaseBdev4", 00:19:12.384 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:12.384 "is_configured": true, 00:19:12.384 "data_offset": 2048, 00:19:12.384 "data_size": 63488 00:19:12.384 } 00:19:12.384 ] 00:19:12.384 }' 00:19:12.384 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.644 [2024-12-06 06:45:31.143735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:12.644 [2024-12-06 06:45:31.147669] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:12.644 [2024-12-06 06:45:31.147758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.644 [2024-12-06 06:45:31.147783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:12.644 [2024-12-06 06:45:31.147798] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.644 
06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.644 "name": "raid_bdev1", 00:19:12.644 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:12.644 "strip_size_kb": 0, 00:19:12.644 "state": "online", 00:19:12.644 "raid_level": "raid1", 00:19:12.644 "superblock": true, 00:19:12.644 "num_base_bdevs": 4, 00:19:12.644 "num_base_bdevs_discovered": 2, 00:19:12.644 "num_base_bdevs_operational": 2, 00:19:12.644 "base_bdevs_list": [ 00:19:12.644 { 00:19:12.644 "name": null, 00:19:12.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.644 "is_configured": false, 00:19:12.644 "data_offset": 0, 00:19:12.644 "data_size": 63488 00:19:12.644 }, 00:19:12.644 { 00:19:12.644 "name": null, 00:19:12.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.644 "is_configured": false, 00:19:12.644 "data_offset": 2048, 00:19:12.644 "data_size": 63488 00:19:12.644 }, 00:19:12.644 { 00:19:12.644 "name": "BaseBdev3", 00:19:12.644 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:12.644 "is_configured": true, 00:19:12.644 "data_offset": 2048, 00:19:12.644 "data_size": 63488 00:19:12.644 }, 00:19:12.644 { 00:19:12.644 "name": "BaseBdev4", 00:19:12.644 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:12.644 "is_configured": true, 00:19:12.644 "data_offset": 2048, 00:19:12.644 "data_size": 63488 00:19:12.644 } 00:19:12.644 ] 00:19:12.644 }' 00:19:12.644 06:45:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.644 06:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.211 "name": "raid_bdev1", 00:19:13.211 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:13.211 "strip_size_kb": 0, 00:19:13.211 "state": "online", 00:19:13.211 "raid_level": "raid1", 00:19:13.211 "superblock": true, 00:19:13.211 "num_base_bdevs": 4, 00:19:13.211 "num_base_bdevs_discovered": 2, 00:19:13.211 "num_base_bdevs_operational": 2, 00:19:13.211 "base_bdevs_list": [ 00:19:13.211 { 00:19:13.211 "name": null, 00:19:13.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.211 "is_configured": false, 00:19:13.211 "data_offset": 0, 00:19:13.211 "data_size": 63488 00:19:13.211 }, 00:19:13.211 
{ 00:19:13.211 "name": null, 00:19:13.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.211 "is_configured": false, 00:19:13.211 "data_offset": 2048, 00:19:13.211 "data_size": 63488 00:19:13.211 }, 00:19:13.211 { 00:19:13.211 "name": "BaseBdev3", 00:19:13.211 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:13.211 "is_configured": true, 00:19:13.211 "data_offset": 2048, 00:19:13.211 "data_size": 63488 00:19:13.211 }, 00:19:13.211 { 00:19:13.211 "name": "BaseBdev4", 00:19:13.211 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:13.211 "is_configured": true, 00:19:13.211 "data_offset": 2048, 00:19:13.211 "data_size": 63488 00:19:13.211 } 00:19:13.211 ] 00:19:13.211 }' 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.211 [2024-12-06 06:45:31.839863] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:13.211 [2024-12-06 06:45:31.839954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.211 [2024-12-06 06:45:31.839986] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:19:13.211 [2024-12-06 06:45:31.840004] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.211 [2024-12-06 06:45:31.840612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.211 [2024-12-06 06:45:31.840658] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:13.211 [2024-12-06 06:45:31.840767] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:13.211 [2024-12-06 06:45:31.840793] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:13.211 [2024-12-06 06:45:31.840805] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:13.211 [2024-12-06 06:45:31.840835] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:13.211 BaseBdev1 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.211 06:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:14.586 06:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:14.586 06:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:14.586 06:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.586 06:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.586 06:45:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.586 06:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:14.586 06:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.586 06:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.586 06:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.586 06:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.586 06:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.586 06:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.586 06:45:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.586 06:45:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.586 06:45:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.586 06:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.586 "name": "raid_bdev1", 00:19:14.586 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:14.586 "strip_size_kb": 0, 00:19:14.586 "state": "online", 00:19:14.586 "raid_level": "raid1", 00:19:14.586 "superblock": true, 00:19:14.586 "num_base_bdevs": 4, 00:19:14.586 "num_base_bdevs_discovered": 2, 00:19:14.586 "num_base_bdevs_operational": 2, 00:19:14.586 "base_bdevs_list": [ 00:19:14.586 { 00:19:14.586 "name": null, 00:19:14.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.586 "is_configured": false, 00:19:14.586 "data_offset": 0, 00:19:14.586 "data_size": 63488 00:19:14.586 }, 00:19:14.586 { 00:19:14.586 "name": null, 00:19:14.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.586 
"is_configured": false, 00:19:14.586 "data_offset": 2048, 00:19:14.586 "data_size": 63488 00:19:14.586 }, 00:19:14.586 { 00:19:14.586 "name": "BaseBdev3", 00:19:14.586 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:14.586 "is_configured": true, 00:19:14.586 "data_offset": 2048, 00:19:14.586 "data_size": 63488 00:19:14.586 }, 00:19:14.586 { 00:19:14.586 "name": "BaseBdev4", 00:19:14.586 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:14.586 "is_configured": true, 00:19:14.586 "data_offset": 2048, 00:19:14.586 "data_size": 63488 00:19:14.586 } 00:19:14.586 ] 00:19:14.586 }' 00:19:14.586 06:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.586 06:45:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.844 06:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:14.844 06:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.844 06:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:14.844 06:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:14.844 06:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.844 06:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.844 06:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.844 06:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.844 06:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.844 06:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.844 06:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:14.844 "name": "raid_bdev1", 00:19:14.844 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:14.844 "strip_size_kb": 0, 00:19:14.844 "state": "online", 00:19:14.844 "raid_level": "raid1", 00:19:14.844 "superblock": true, 00:19:14.844 "num_base_bdevs": 4, 00:19:14.844 "num_base_bdevs_discovered": 2, 00:19:14.844 "num_base_bdevs_operational": 2, 00:19:14.844 "base_bdevs_list": [ 00:19:14.844 { 00:19:14.844 "name": null, 00:19:14.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.844 "is_configured": false, 00:19:14.844 "data_offset": 0, 00:19:14.844 "data_size": 63488 00:19:14.844 }, 00:19:14.844 { 00:19:14.844 "name": null, 00:19:14.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.844 "is_configured": false, 00:19:14.844 "data_offset": 2048, 00:19:14.844 "data_size": 63488 00:19:14.844 }, 00:19:14.844 { 00:19:14.844 "name": "BaseBdev3", 00:19:14.844 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:14.844 "is_configured": true, 00:19:14.844 "data_offset": 2048, 00:19:14.844 "data_size": 63488 00:19:14.844 }, 00:19:14.844 { 00:19:14.844 "name": "BaseBdev4", 00:19:14.844 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:14.844 "is_configured": true, 00:19:14.844 "data_offset": 2048, 00:19:14.844 "data_size": 63488 00:19:14.844 } 00:19:14.844 ] 00:19:14.844 }' 00:19:14.844 06:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.845 06:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:14.845 06:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.103 06:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:15.103 06:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:15.103 06:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:19:15.103 06:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:15.103 06:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:15.103 06:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:15.103 06:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:15.103 06:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:15.103 06:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:15.103 06:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.103 06:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.103 [2024-12-06 06:45:33.544516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:15.103 [2024-12-06 06:45:33.544826] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:15.103 [2024-12-06 06:45:33.544862] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:15.103 request: 00:19:15.103 { 00:19:15.103 "base_bdev": "BaseBdev1", 00:19:15.103 "raid_bdev": "raid_bdev1", 00:19:15.103 "method": "bdev_raid_add_base_bdev", 00:19:15.103 "req_id": 1 00:19:15.103 } 00:19:15.103 Got JSON-RPC error response 00:19:15.103 response: 00:19:15.103 { 00:19:15.103 "code": -22, 00:19:15.103 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:15.103 } 00:19:15.103 06:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:15.103 06:45:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:19:15.103 06:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:15.103 06:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:15.103 06:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:15.103 06:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.037 "name": "raid_bdev1", 00:19:16.037 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:16.037 "strip_size_kb": 0, 00:19:16.037 "state": "online", 00:19:16.037 "raid_level": "raid1", 00:19:16.037 "superblock": true, 00:19:16.037 "num_base_bdevs": 4, 00:19:16.037 "num_base_bdevs_discovered": 2, 00:19:16.037 "num_base_bdevs_operational": 2, 00:19:16.037 "base_bdevs_list": [ 00:19:16.037 { 00:19:16.037 "name": null, 00:19:16.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.037 "is_configured": false, 00:19:16.037 "data_offset": 0, 00:19:16.037 "data_size": 63488 00:19:16.037 }, 00:19:16.037 { 00:19:16.037 "name": null, 00:19:16.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.037 "is_configured": false, 00:19:16.037 "data_offset": 2048, 00:19:16.037 "data_size": 63488 00:19:16.037 }, 00:19:16.037 { 00:19:16.037 "name": "BaseBdev3", 00:19:16.037 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:16.037 "is_configured": true, 00:19:16.037 "data_offset": 2048, 00:19:16.037 "data_size": 63488 00:19:16.037 }, 00:19:16.037 { 00:19:16.037 "name": "BaseBdev4", 00:19:16.037 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:16.037 "is_configured": true, 00:19:16.037 "data_offset": 2048, 00:19:16.037 "data_size": 63488 00:19:16.037 } 00:19:16.037 ] 00:19:16.037 }' 00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.037 06:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.603 06:45:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.603 "name": "raid_bdev1", 00:19:16.603 "uuid": "dbc1be9f-19cf-4cc3-a57d-a613902ab9cf", 00:19:16.603 "strip_size_kb": 0, 00:19:16.603 "state": "online", 00:19:16.603 "raid_level": "raid1", 00:19:16.603 "superblock": true, 00:19:16.603 "num_base_bdevs": 4, 00:19:16.603 "num_base_bdevs_discovered": 2, 00:19:16.603 "num_base_bdevs_operational": 2, 00:19:16.603 "base_bdevs_list": [ 00:19:16.603 { 00:19:16.603 "name": null, 00:19:16.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.603 "is_configured": false, 00:19:16.603 "data_offset": 0, 00:19:16.603 "data_size": 63488 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "name": null, 00:19:16.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.603 "is_configured": false, 00:19:16.603 "data_offset": 2048, 00:19:16.603 "data_size": 63488 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "name": "BaseBdev3", 00:19:16.603 "uuid": "f36b5a7e-2eb6-52ac-a5c9-90a8a000c90a", 00:19:16.603 "is_configured": true, 00:19:16.603 "data_offset": 2048, 00:19:16.603 "data_size": 63488 00:19:16.603 }, 
00:19:16.603 { 00:19:16.603 "name": "BaseBdev4", 00:19:16.603 "uuid": "40bbb04e-0bb8-566e-88ad-22c47ead8d69", 00:19:16.603 "is_configured": true, 00:19:16.603 "data_offset": 2048, 00:19:16.603 "data_size": 63488 00:19:16.603 } 00:19:16.603 ] 00:19:16.603 }' 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78441 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78441 ']' 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78441 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78441 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.603 killing process with pid 78441 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78441' 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78441 00:19:16.603 Received shutdown signal, test time was about 60.000000 seconds 00:19:16.603 00:19:16.603 Latency(us) 00:19:16.603 
[2024-12-06T06:45:35.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.603 [2024-12-06T06:45:35.250Z] =================================================================================================================== 00:19:16.603 [2024-12-06T06:45:35.250Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:16.603 [2024-12-06 06:45:35.239954] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:16.603 06:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78441 00:19:16.603 [2024-12-06 06:45:35.240105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.603 [2024-12-06 06:45:35.240240] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:16.603 [2024-12-06 06:45:35.240266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:17.167 [2024-12-06 06:45:35.685829] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:18.102 06:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:18.102 00:19:18.102 real 0m29.471s 00:19:18.102 user 0m35.790s 00:19:18.102 sys 0m4.094s 00:19:18.102 06:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.102 06:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.102 ************************************ 00:19:18.102 END TEST raid_rebuild_test_sb 00:19:18.102 ************************************ 00:19:18.362 06:45:36 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:19:18.362 06:45:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:18.362 06:45:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.362 06:45:36 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:19:18.362 ************************************ 00:19:18.362 START TEST raid_rebuild_test_io 00:19:18.362 ************************************ 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79235 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79235 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79235 ']' 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.362 06:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:18.362 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:18.362 Zero copy mechanism will not be used. 00:19:18.362 [2024-12-06 06:45:36.907238] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:19:18.362 [2024-12-06 06:45:36.907403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79235 ] 00:19:18.663 [2024-12-06 06:45:37.098760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.663 [2024-12-06 06:45:37.256881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.922 [2024-12-06 06:45:37.474592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:18.922 [2024-12-06 06:45:37.474668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.491 06:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.491 06:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:19:19.491 06:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:19.491 06:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:19:19.491 06:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.491 06:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.491 BaseBdev1_malloc 00:19:19.491 06:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.492 06:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:19.492 06:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.492 06:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.492 [2024-12-06 06:45:37.906092] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:19.492 [2024-12-06 06:45:37.906168] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.492 [2024-12-06 06:45:37.906199] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:19.492 [2024-12-06 06:45:37.906217] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.492 [2024-12-06 06:45:37.909054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.492 [2024-12-06 06:45:37.909103] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:19.492 BaseBdev1 00:19:19.492 06:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.492 06:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:19.492 06:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:19.492 06:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.492 06:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:19:19.492 BaseBdev2_malloc 00:19:19.492 06:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.492 06:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:19.492 06:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.492 06:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.492 [2024-12-06 06:45:37.958097] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:19.492 [2024-12-06 06:45:37.958180] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.492 [2024-12-06 06:45:37.958214] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:19.492 [2024-12-06 06:45:37.958231] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.492 [2024-12-06 06:45:37.961064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.492 [2024-12-06 06:45:37.961112] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:19.492 BaseBdev2 00:19:19.492 06:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.492 06:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:19.492 06:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:19.492 06:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.492 06:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.492 BaseBdev3_malloc 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.492 [2024-12-06 06:45:38.022924] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:19.492 [2024-12-06 06:45:38.022994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.492 [2024-12-06 06:45:38.023025] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:19.492 [2024-12-06 06:45:38.023042] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.492 [2024-12-06 06:45:38.025773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.492 [2024-12-06 06:45:38.025822] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:19.492 BaseBdev3 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.492 BaseBdev4_malloc 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.492 [2024-12-06 06:45:38.081220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:19.492 [2024-12-06 06:45:38.081300] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.492 [2024-12-06 06:45:38.081333] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:19.492 [2024-12-06 06:45:38.081352] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.492 [2024-12-06 06:45:38.084326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.492 [2024-12-06 06:45:38.084396] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:19.492 BaseBdev4 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.492 spare_malloc 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.492 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.751 spare_delay 00:19:19.751 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.751 06:45:38 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:19.751 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.751 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.751 [2024-12-06 06:45:38.141969] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:19.751 [2024-12-06 06:45:38.142039] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.751 [2024-12-06 06:45:38.142067] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:19.751 [2024-12-06 06:45:38.142085] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.751 [2024-12-06 06:45:38.144976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.751 [2024-12-06 06:45:38.145025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:19.751 spare 00:19:19.751 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.751 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:19.751 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.751 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.751 [2024-12-06 06:45:38.150019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:19.751 [2024-12-06 06:45:38.152498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:19.751 [2024-12-06 06:45:38.152615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:19.751 [2024-12-06 06:45:38.152708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:19:19.751 [2024-12-06 06:45:38.152818] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:19.751 [2024-12-06 06:45:38.152841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:19.751 [2024-12-06 06:45:38.153167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:19.751 [2024-12-06 06:45:38.153408] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:19.751 [2024-12-06 06:45:38.153433] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:19.751 [2024-12-06 06:45:38.153640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.751 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.751 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:19.751 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.751 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.751 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.751 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.751 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:19.751 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.751 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.752 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.752 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:19.752 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.752 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.752 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.752 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.752 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.752 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.752 "name": "raid_bdev1", 00:19:19.752 "uuid": "fd607b6f-5993-4e85-a318-8383d0e1cf3e", 00:19:19.752 "strip_size_kb": 0, 00:19:19.752 "state": "online", 00:19:19.752 "raid_level": "raid1", 00:19:19.752 "superblock": false, 00:19:19.752 "num_base_bdevs": 4, 00:19:19.752 "num_base_bdevs_discovered": 4, 00:19:19.752 "num_base_bdevs_operational": 4, 00:19:19.752 "base_bdevs_list": [ 00:19:19.752 { 00:19:19.752 "name": "BaseBdev1", 00:19:19.752 "uuid": "10e02d92-bed7-50f5-86e5-74eed0229254", 00:19:19.752 "is_configured": true, 00:19:19.752 "data_offset": 0, 00:19:19.752 "data_size": 65536 00:19:19.752 }, 00:19:19.752 { 00:19:19.752 "name": "BaseBdev2", 00:19:19.752 "uuid": "e65ab4e5-6bb4-58fb-9406-6723785ebec0", 00:19:19.752 "is_configured": true, 00:19:19.752 "data_offset": 0, 00:19:19.752 "data_size": 65536 00:19:19.752 }, 00:19:19.752 { 00:19:19.752 "name": "BaseBdev3", 00:19:19.752 "uuid": "8a851cb7-c073-52ab-90de-9f4a71fdca34", 00:19:19.752 "is_configured": true, 00:19:19.752 "data_offset": 0, 00:19:19.752 "data_size": 65536 00:19:19.752 }, 00:19:19.752 { 00:19:19.752 "name": "BaseBdev4", 00:19:19.752 "uuid": "0378be54-540a-5015-8c28-edbbeddd0c0a", 00:19:19.752 "is_configured": true, 00:19:19.752 "data_offset": 0, 00:19:19.752 "data_size": 65536 00:19:19.752 } 00:19:19.752 ] 00:19:19.752 }' 00:19:19.752 
06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.752 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.318 [2024-12-06 06:45:38.682765] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:20.318 06:45:38 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.318 [2024-12-06 06:45:38.790272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.318 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.319 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.319 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:20.319 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.319 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.319 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.319 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.319 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.319 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.319 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.319 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.319 06:45:38 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.319 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.319 "name": "raid_bdev1", 00:19:20.319 "uuid": "fd607b6f-5993-4e85-a318-8383d0e1cf3e", 00:19:20.319 "strip_size_kb": 0, 00:19:20.319 "state": "online", 00:19:20.319 "raid_level": "raid1", 00:19:20.319 "superblock": false, 00:19:20.319 "num_base_bdevs": 4, 00:19:20.319 "num_base_bdevs_discovered": 3, 00:19:20.319 "num_base_bdevs_operational": 3, 00:19:20.319 "base_bdevs_list": [ 00:19:20.319 { 00:19:20.319 "name": null, 00:19:20.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.319 "is_configured": false, 00:19:20.319 "data_offset": 0, 00:19:20.319 "data_size": 65536 00:19:20.319 }, 00:19:20.319 { 00:19:20.319 "name": "BaseBdev2", 00:19:20.319 "uuid": "e65ab4e5-6bb4-58fb-9406-6723785ebec0", 00:19:20.319 "is_configured": true, 00:19:20.319 "data_offset": 0, 00:19:20.319 "data_size": 65536 00:19:20.319 }, 00:19:20.319 { 00:19:20.319 "name": "BaseBdev3", 00:19:20.319 "uuid": "8a851cb7-c073-52ab-90de-9f4a71fdca34", 00:19:20.319 "is_configured": true, 00:19:20.319 "data_offset": 0, 00:19:20.319 "data_size": 65536 00:19:20.319 }, 00:19:20.319 { 00:19:20.319 "name": "BaseBdev4", 00:19:20.319 "uuid": "0378be54-540a-5015-8c28-edbbeddd0c0a", 00:19:20.319 "is_configured": true, 00:19:20.319 "data_offset": 0, 00:19:20.319 "data_size": 65536 00:19:20.319 } 00:19:20.319 ] 00:19:20.319 }' 00:19:20.319 06:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.319 06:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.319 [2024-12-06 06:45:38.920444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:20.319 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:20.319 Zero copy mechanism will not be used. 00:19:20.319 Running I/O for 60 seconds... 
00:19:20.886 06:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:20.886 06:45:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.886 06:45:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.886 [2024-12-06 06:45:39.330778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:20.886 06:45:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.886 06:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:20.886 [2024-12-06 06:45:39.382574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:19:20.886 [2024-12-06 06:45:39.385224] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:20.886 [2024-12-06 06:45:39.506290] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:20.886 [2024-12-06 06:45:39.507016] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:21.145 [2024-12-06 06:45:39.758779] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:21.662 138.00 IOPS, 414.00 MiB/s [2024-12-06T06:45:40.309Z] [2024-12-06 06:45:40.129537] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:21.923 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:21.923 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.923 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:21.923 06:45:40 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:21.923 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.923 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.923 06:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.923 06:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.923 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.923 06:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.923 [2024-12-06 06:45:40.399842] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:21.923 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.923 "name": "raid_bdev1", 00:19:21.923 "uuid": "fd607b6f-5993-4e85-a318-8383d0e1cf3e", 00:19:21.923 "strip_size_kb": 0, 00:19:21.923 "state": "online", 00:19:21.923 "raid_level": "raid1", 00:19:21.923 "superblock": false, 00:19:21.923 "num_base_bdevs": 4, 00:19:21.923 "num_base_bdevs_discovered": 4, 00:19:21.923 "num_base_bdevs_operational": 4, 00:19:21.923 "process": { 00:19:21.923 "type": "rebuild", 00:19:21.923 "target": "spare", 00:19:21.923 "progress": { 00:19:21.923 "blocks": 8192, 00:19:21.923 "percent": 12 00:19:21.923 } 00:19:21.923 }, 00:19:21.923 "base_bdevs_list": [ 00:19:21.923 { 00:19:21.923 "name": "spare", 00:19:21.923 "uuid": "1d40114f-85f9-5014-9655-ea1a7b5581d7", 00:19:21.923 "is_configured": true, 00:19:21.923 "data_offset": 0, 00:19:21.923 "data_size": 65536 00:19:21.923 }, 00:19:21.923 { 00:19:21.923 "name": "BaseBdev2", 00:19:21.923 "uuid": "e65ab4e5-6bb4-58fb-9406-6723785ebec0", 00:19:21.923 "is_configured": true, 00:19:21.923 "data_offset": 0, 
00:19:21.923 "data_size": 65536 00:19:21.923 }, 00:19:21.923 { 00:19:21.923 "name": "BaseBdev3", 00:19:21.923 "uuid": "8a851cb7-c073-52ab-90de-9f4a71fdca34", 00:19:21.923 "is_configured": true, 00:19:21.923 "data_offset": 0, 00:19:21.923 "data_size": 65536 00:19:21.923 }, 00:19:21.923 { 00:19:21.923 "name": "BaseBdev4", 00:19:21.923 "uuid": "0378be54-540a-5015-8c28-edbbeddd0c0a", 00:19:21.923 "is_configured": true, 00:19:21.923 "data_offset": 0, 00:19:21.923 "data_size": 65536 00:19:21.923 } 00:19:21.923 ] 00:19:21.923 }' 00:19:21.923 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.923 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:21.923 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.923 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.923 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:21.923 06:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.923 06:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.923 [2024-12-06 06:45:40.529896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.182 [2024-12-06 06:45:40.730572] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:22.182 [2024-12-06 06:45:40.745260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.182 [2024-12-06 06:45:40.745325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.182 [2024-12-06 06:45:40.745342] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:22.182 [2024-12-06 06:45:40.779207] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:19:22.183 06:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.183 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:22.183 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.183 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.183 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.183 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.183 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:22.183 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.183 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.183 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.183 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.183 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.183 06:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.183 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.183 06:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:22.441 06:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.441 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.441 "name": "raid_bdev1", 00:19:22.441 
"uuid": "fd607b6f-5993-4e85-a318-8383d0e1cf3e", 00:19:22.441 "strip_size_kb": 0, 00:19:22.441 "state": "online", 00:19:22.441 "raid_level": "raid1", 00:19:22.441 "superblock": false, 00:19:22.441 "num_base_bdevs": 4, 00:19:22.441 "num_base_bdevs_discovered": 3, 00:19:22.441 "num_base_bdevs_operational": 3, 00:19:22.441 "base_bdevs_list": [ 00:19:22.441 { 00:19:22.441 "name": null, 00:19:22.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.441 "is_configured": false, 00:19:22.441 "data_offset": 0, 00:19:22.441 "data_size": 65536 00:19:22.441 }, 00:19:22.441 { 00:19:22.441 "name": "BaseBdev2", 00:19:22.441 "uuid": "e65ab4e5-6bb4-58fb-9406-6723785ebec0", 00:19:22.441 "is_configured": true, 00:19:22.441 "data_offset": 0, 00:19:22.441 "data_size": 65536 00:19:22.441 }, 00:19:22.441 { 00:19:22.441 "name": "BaseBdev3", 00:19:22.441 "uuid": "8a851cb7-c073-52ab-90de-9f4a71fdca34", 00:19:22.441 "is_configured": true, 00:19:22.441 "data_offset": 0, 00:19:22.441 "data_size": 65536 00:19:22.441 }, 00:19:22.441 { 00:19:22.441 "name": "BaseBdev4", 00:19:22.441 "uuid": "0378be54-540a-5015-8c28-edbbeddd0c0a", 00:19:22.441 "is_configured": true, 00:19:22.441 "data_offset": 0, 00:19:22.441 "data_size": 65536 00:19:22.441 } 00:19:22.441 ] 00:19:22.441 }' 00:19:22.441 06:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.441 06:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:23.008 113.50 IOPS, 340.50 MiB/s [2024-12-06T06:45:41.655Z] 06:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:23.008 06:45:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.008 "name": "raid_bdev1", 00:19:23.008 "uuid": "fd607b6f-5993-4e85-a318-8383d0e1cf3e", 00:19:23.008 "strip_size_kb": 0, 00:19:23.008 "state": "online", 00:19:23.008 "raid_level": "raid1", 00:19:23.008 "superblock": false, 00:19:23.008 "num_base_bdevs": 4, 00:19:23.008 "num_base_bdevs_discovered": 3, 00:19:23.008 "num_base_bdevs_operational": 3, 00:19:23.008 "base_bdevs_list": [ 00:19:23.008 { 00:19:23.008 "name": null, 00:19:23.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.008 "is_configured": false, 00:19:23.008 "data_offset": 0, 00:19:23.008 "data_size": 65536 00:19:23.008 }, 00:19:23.008 { 00:19:23.008 "name": "BaseBdev2", 00:19:23.008 "uuid": "e65ab4e5-6bb4-58fb-9406-6723785ebec0", 00:19:23.008 "is_configured": true, 00:19:23.008 "data_offset": 0, 00:19:23.008 "data_size": 65536 00:19:23.008 }, 00:19:23.008 { 00:19:23.008 "name": "BaseBdev3", 00:19:23.008 "uuid": "8a851cb7-c073-52ab-90de-9f4a71fdca34", 00:19:23.008 "is_configured": true, 00:19:23.008 "data_offset": 0, 00:19:23.008 "data_size": 65536 00:19:23.008 }, 00:19:23.008 { 00:19:23.008 "name": "BaseBdev4", 00:19:23.008 "uuid": "0378be54-540a-5015-8c28-edbbeddd0c0a", 00:19:23.008 "is_configured": true, 00:19:23.008 "data_offset": 0, 00:19:23.008 "data_size": 65536 
00:19:23.008 } 00:19:23.008 ] 00:19:23.008 }' 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:23.008 [2024-12-06 06:45:41.540465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.008 06:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:23.008 [2024-12-06 06:45:41.626447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:23.008 [2024-12-06 06:45:41.629080] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:23.269 [2024-12-06 06:45:41.759901] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:23.269 [2024-12-06 06:45:41.761665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:23.529 123.00 IOPS, 369.00 MiB/s [2024-12-06T06:45:42.176Z] [2024-12-06 06:45:41.987909] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:23.529 [2024-12-06 06:45:41.988301] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:24.097 [2024-12-06 06:45:42.535604] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:24.097 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.097 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.097 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.097 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.097 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.097 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.097 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.097 06:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.097 06:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:24.097 06:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.097 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.097 "name": "raid_bdev1", 00:19:24.097 "uuid": "fd607b6f-5993-4e85-a318-8383d0e1cf3e", 00:19:24.097 "strip_size_kb": 0, 00:19:24.097 "state": "online", 00:19:24.097 "raid_level": "raid1", 00:19:24.097 "superblock": false, 00:19:24.097 "num_base_bdevs": 4, 00:19:24.097 "num_base_bdevs_discovered": 4, 00:19:24.097 "num_base_bdevs_operational": 4, 00:19:24.097 "process": { 00:19:24.097 "type": "rebuild", 00:19:24.097 "target": "spare", 00:19:24.097 "progress": { 00:19:24.097 "blocks": 10240, 00:19:24.097 
"percent": 15 00:19:24.097 } 00:19:24.097 }, 00:19:24.097 "base_bdevs_list": [ 00:19:24.097 { 00:19:24.097 "name": "spare", 00:19:24.097 "uuid": "1d40114f-85f9-5014-9655-ea1a7b5581d7", 00:19:24.097 "is_configured": true, 00:19:24.097 "data_offset": 0, 00:19:24.097 "data_size": 65536 00:19:24.097 }, 00:19:24.097 { 00:19:24.097 "name": "BaseBdev2", 00:19:24.097 "uuid": "e65ab4e5-6bb4-58fb-9406-6723785ebec0", 00:19:24.097 "is_configured": true, 00:19:24.097 "data_offset": 0, 00:19:24.097 "data_size": 65536 00:19:24.097 }, 00:19:24.097 { 00:19:24.097 "name": "BaseBdev3", 00:19:24.097 "uuid": "8a851cb7-c073-52ab-90de-9f4a71fdca34", 00:19:24.097 "is_configured": true, 00:19:24.097 "data_offset": 0, 00:19:24.097 "data_size": 65536 00:19:24.097 }, 00:19:24.097 { 00:19:24.097 "name": "BaseBdev4", 00:19:24.097 "uuid": "0378be54-540a-5015-8c28-edbbeddd0c0a", 00:19:24.097 "is_configured": true, 00:19:24.097 "data_offset": 0, 00:19:24.097 "data_size": 65536 00:19:24.097 } 00:19:24.097 ] 00:19:24.097 }' 00:19:24.097 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.097 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.097 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.357 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.357 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:24.357 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:24.357 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:24.357 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:19:24.357 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev2 00:19:24.357 06:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.357 06:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:24.357 [2024-12-06 06:45:42.779835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:24.357 [2024-12-06 06:45:42.809328] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:19:24.357 [2024-12-06 06:45:42.809505] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:19:24.357 06:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.357 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:19:24.357 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:24.358 06:45:42 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.358 "name": "raid_bdev1", 00:19:24.358 "uuid": "fd607b6f-5993-4e85-a318-8383d0e1cf3e", 00:19:24.358 "strip_size_kb": 0, 00:19:24.358 "state": "online", 00:19:24.358 "raid_level": "raid1", 00:19:24.358 "superblock": false, 00:19:24.358 "num_base_bdevs": 4, 00:19:24.358 "num_base_bdevs_discovered": 3, 00:19:24.358 "num_base_bdevs_operational": 3, 00:19:24.358 "process": { 00:19:24.358 "type": "rebuild", 00:19:24.358 "target": "spare", 00:19:24.358 "progress": { 00:19:24.358 "blocks": 12288, 00:19:24.358 "percent": 18 00:19:24.358 } 00:19:24.358 }, 00:19:24.358 "base_bdevs_list": [ 00:19:24.358 { 00:19:24.358 "name": "spare", 00:19:24.358 "uuid": "1d40114f-85f9-5014-9655-ea1a7b5581d7", 00:19:24.358 "is_configured": true, 00:19:24.358 "data_offset": 0, 00:19:24.358 "data_size": 65536 00:19:24.358 }, 00:19:24.358 { 00:19:24.358 "name": null, 00:19:24.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.358 "is_configured": false, 00:19:24.358 "data_offset": 0, 00:19:24.358 "data_size": 65536 00:19:24.358 }, 00:19:24.358 { 00:19:24.358 "name": "BaseBdev3", 00:19:24.358 "uuid": "8a851cb7-c073-52ab-90de-9f4a71fdca34", 00:19:24.358 "is_configured": true, 00:19:24.358 "data_offset": 0, 00:19:24.358 "data_size": 65536 00:19:24.358 }, 00:19:24.358 { 00:19:24.358 "name": "BaseBdev4", 00:19:24.358 "uuid": "0378be54-540a-5015-8c28-edbbeddd0c0a", 00:19:24.358 "is_configured": true, 00:19:24.358 "data_offset": 0, 00:19:24.358 "data_size": 65536 00:19:24.358 } 00:19:24.358 ] 00:19:24.358 }' 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.358 113.50 IOPS, 340.50 MiB/s [2024-12-06T06:45:43.005Z] [2024-12-06 06:45:42.974644] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:24.358 [2024-12-06 06:45:42.976036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=522 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:24.358 06:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.618 06:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.618 06:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.618 "name": 
"raid_bdev1", 00:19:24.618 "uuid": "fd607b6f-5993-4e85-a318-8383d0e1cf3e", 00:19:24.618 "strip_size_kb": 0, 00:19:24.618 "state": "online", 00:19:24.618 "raid_level": "raid1", 00:19:24.618 "superblock": false, 00:19:24.618 "num_base_bdevs": 4, 00:19:24.618 "num_base_bdevs_discovered": 3, 00:19:24.618 "num_base_bdevs_operational": 3, 00:19:24.618 "process": { 00:19:24.618 "type": "rebuild", 00:19:24.618 "target": "spare", 00:19:24.618 "progress": { 00:19:24.618 "blocks": 14336, 00:19:24.618 "percent": 21 00:19:24.618 } 00:19:24.618 }, 00:19:24.618 "base_bdevs_list": [ 00:19:24.618 { 00:19:24.618 "name": "spare", 00:19:24.618 "uuid": "1d40114f-85f9-5014-9655-ea1a7b5581d7", 00:19:24.618 "is_configured": true, 00:19:24.618 "data_offset": 0, 00:19:24.618 "data_size": 65536 00:19:24.618 }, 00:19:24.618 { 00:19:24.618 "name": null, 00:19:24.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.618 "is_configured": false, 00:19:24.618 "data_offset": 0, 00:19:24.618 "data_size": 65536 00:19:24.618 }, 00:19:24.618 { 00:19:24.618 "name": "BaseBdev3", 00:19:24.618 "uuid": "8a851cb7-c073-52ab-90de-9f4a71fdca34", 00:19:24.618 "is_configured": true, 00:19:24.618 "data_offset": 0, 00:19:24.618 "data_size": 65536 00:19:24.618 }, 00:19:24.618 { 00:19:24.618 "name": "BaseBdev4", 00:19:24.618 "uuid": "0378be54-540a-5015-8c28-edbbeddd0c0a", 00:19:24.618 "is_configured": true, 00:19:24.618 "data_offset": 0, 00:19:24.618 "data_size": 65536 00:19:24.618 } 00:19:24.618 ] 00:19:24.618 }' 00:19:24.618 06:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.618 06:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.618 06:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.618 06:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.618 06:45:43 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:24.618 [2024-12-06 06:45:43.197832] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:24.619 [2024-12-06 06:45:43.198188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:25.186 [2024-12-06 06:45:43.541234] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:25.186 [2024-12-06 06:45:43.542472] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:25.186 [2024-12-06 06:45:43.752606] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:25.445 100.60 IOPS, 301.80 MiB/s [2024-12-06T06:45:44.092Z] [2024-12-06 06:45:44.058476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:25.445 [2024-12-06 06:45:44.059540] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:25.705 06:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:25.705 06:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:25.705 06:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.705 06:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:25.705 06:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:25.705 06:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.705 06:45:44 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.705 06:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.705 06:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.705 06:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:25.705 06:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.705 06:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.705 "name": "raid_bdev1", 00:19:25.705 "uuid": "fd607b6f-5993-4e85-a318-8383d0e1cf3e", 00:19:25.705 "strip_size_kb": 0, 00:19:25.705 "state": "online", 00:19:25.705 "raid_level": "raid1", 00:19:25.705 "superblock": false, 00:19:25.705 "num_base_bdevs": 4, 00:19:25.705 "num_base_bdevs_discovered": 3, 00:19:25.705 "num_base_bdevs_operational": 3, 00:19:25.705 "process": { 00:19:25.705 "type": "rebuild", 00:19:25.705 "target": "spare", 00:19:25.705 "progress": { 00:19:25.705 "blocks": 26624, 00:19:25.705 "percent": 40 00:19:25.705 } 00:19:25.705 }, 00:19:25.705 "base_bdevs_list": [ 00:19:25.705 { 00:19:25.705 "name": "spare", 00:19:25.705 "uuid": "1d40114f-85f9-5014-9655-ea1a7b5581d7", 00:19:25.705 "is_configured": true, 00:19:25.705 "data_offset": 0, 00:19:25.705 "data_size": 65536 00:19:25.705 }, 00:19:25.705 { 00:19:25.705 "name": null, 00:19:25.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.705 "is_configured": false, 00:19:25.705 "data_offset": 0, 00:19:25.705 "data_size": 65536 00:19:25.705 }, 00:19:25.705 { 00:19:25.705 "name": "BaseBdev3", 00:19:25.705 "uuid": "8a851cb7-c073-52ab-90de-9f4a71fdca34", 00:19:25.705 "is_configured": true, 00:19:25.705 "data_offset": 0, 00:19:25.705 "data_size": 65536 00:19:25.705 }, 00:19:25.705 { 00:19:25.705 "name": "BaseBdev4", 00:19:25.705 "uuid": "0378be54-540a-5015-8c28-edbbeddd0c0a", 00:19:25.705 "is_configured": true, 
00:19:25.705 "data_offset": 0, 00:19:25.705 "data_size": 65536 00:19:25.705 } 00:19:25.705 ] 00:19:25.705 }' 00:19:25.705 06:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.705 06:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:25.705 06:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.705 06:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.705 06:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:25.965 [2024-12-06 06:45:44.403436] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:26.225 [2024-12-06 06:45:44.669089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:26.742 92.17 IOPS, 276.50 MiB/s [2024-12-06T06:45:45.389Z] 06:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:26.742 06:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:26.742 06:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.742 06:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:26.742 06:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:26.742 06:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.742 06:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.742 06:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.742 06:45:45 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.742 06:45:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:26.742 06:45:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.742 [2024-12-06 06:45:45.372636] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:19:27.000 06:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.000 "name": "raid_bdev1", 00:19:27.000 "uuid": "fd607b6f-5993-4e85-a318-8383d0e1cf3e", 00:19:27.000 "strip_size_kb": 0, 00:19:27.000 "state": "online", 00:19:27.000 "raid_level": "raid1", 00:19:27.000 "superblock": false, 00:19:27.000 "num_base_bdevs": 4, 00:19:27.000 "num_base_bdevs_discovered": 3, 00:19:27.000 "num_base_bdevs_operational": 3, 00:19:27.000 "process": { 00:19:27.000 "type": "rebuild", 00:19:27.000 "target": "spare", 00:19:27.000 "progress": { 00:19:27.000 "blocks": 43008, 00:19:27.000 "percent": 65 00:19:27.000 } 00:19:27.000 }, 00:19:27.000 "base_bdevs_list": [ 00:19:27.000 { 00:19:27.000 "name": "spare", 00:19:27.000 "uuid": "1d40114f-85f9-5014-9655-ea1a7b5581d7", 00:19:27.000 "is_configured": true, 00:19:27.000 "data_offset": 0, 00:19:27.000 "data_size": 65536 00:19:27.000 }, 00:19:27.000 { 00:19:27.000 "name": null, 00:19:27.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.000 "is_configured": false, 00:19:27.000 "data_offset": 0, 00:19:27.000 "data_size": 65536 00:19:27.000 }, 00:19:27.000 { 00:19:27.000 "name": "BaseBdev3", 00:19:27.000 "uuid": "8a851cb7-c073-52ab-90de-9f4a71fdca34", 00:19:27.000 "is_configured": true, 00:19:27.000 "data_offset": 0, 00:19:27.000 "data_size": 65536 00:19:27.000 }, 00:19:27.000 { 00:19:27.000 "name": "BaseBdev4", 00:19:27.000 "uuid": "0378be54-540a-5015-8c28-edbbeddd0c0a", 00:19:27.000 "is_configured": true, 00:19:27.000 "data_offset": 0, 00:19:27.000 
"data_size": 65536 00:19:27.000 } 00:19:27.000 ] 00:19:27.000 }' 00:19:27.000 06:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.000 06:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.000 06:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.000 [2024-12-06 06:45:45.482370] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:27.000 06:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.000 06:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:27.260 [2024-12-06 06:45:45.835394] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:19:27.777 85.71 IOPS, 257.14 MiB/s [2024-12-06T06:45:46.424Z] [2024-12-06 06:45:46.169197] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:19:27.777 [2024-12-06 06:45:46.177571] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:19:28.037 06:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:28.037 06:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:28.037 06:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:28.037 06:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:28.037 06:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:28.037 06:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:19:28.037 06:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.037 06:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.037 06:45:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.037 06:45:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:28.037 06:45:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.037 06:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.037 "name": "raid_bdev1", 00:19:28.037 "uuid": "fd607b6f-5993-4e85-a318-8383d0e1cf3e", 00:19:28.037 "strip_size_kb": 0, 00:19:28.037 "state": "online", 00:19:28.037 "raid_level": "raid1", 00:19:28.037 "superblock": false, 00:19:28.037 "num_base_bdevs": 4, 00:19:28.037 "num_base_bdevs_discovered": 3, 00:19:28.037 "num_base_bdevs_operational": 3, 00:19:28.037 "process": { 00:19:28.037 "type": "rebuild", 00:19:28.037 "target": "spare", 00:19:28.037 "progress": { 00:19:28.037 "blocks": 61440, 00:19:28.037 "percent": 93 00:19:28.037 } 00:19:28.037 }, 00:19:28.037 "base_bdevs_list": [ 00:19:28.037 { 00:19:28.037 "name": "spare", 00:19:28.037 "uuid": "1d40114f-85f9-5014-9655-ea1a7b5581d7", 00:19:28.037 "is_configured": true, 00:19:28.037 "data_offset": 0, 00:19:28.037 "data_size": 65536 00:19:28.037 }, 00:19:28.037 { 00:19:28.037 "name": null, 00:19:28.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.037 "is_configured": false, 00:19:28.037 "data_offset": 0, 00:19:28.037 "data_size": 65536 00:19:28.037 }, 00:19:28.037 { 00:19:28.037 "name": "BaseBdev3", 00:19:28.037 "uuid": "8a851cb7-c073-52ab-90de-9f4a71fdca34", 00:19:28.037 "is_configured": true, 00:19:28.037 "data_offset": 0, 00:19:28.037 "data_size": 65536 00:19:28.037 }, 00:19:28.037 { 00:19:28.037 "name": "BaseBdev4", 00:19:28.037 "uuid": 
"0378be54-540a-5015-8c28-edbbeddd0c0a", 00:19:28.037 "is_configured": true, 00:19:28.037 "data_offset": 0, 00:19:28.037 "data_size": 65536 00:19:28.037 } 00:19:28.037 ] 00:19:28.037 }' 00:19:28.037 06:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.037 06:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:28.037 06:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.037 [2024-12-06 06:45:46.623467] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:28.037 06:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:28.037 06:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:28.296 [2024-12-06 06:45:46.721651] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:28.296 [2024-12-06 06:45:46.724957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.122 79.25 IOPS, 237.75 MiB/s [2024-12-06T06:45:47.769Z] 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:29.122 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:29.122 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.122 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:29.122 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:29.123 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.123 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.123 06:45:47 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.123 06:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.123 06:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.123 06:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.123 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.123 "name": "raid_bdev1", 00:19:29.123 "uuid": "fd607b6f-5993-4e85-a318-8383d0e1cf3e", 00:19:29.123 "strip_size_kb": 0, 00:19:29.123 "state": "online", 00:19:29.123 "raid_level": "raid1", 00:19:29.123 "superblock": false, 00:19:29.123 "num_base_bdevs": 4, 00:19:29.123 "num_base_bdevs_discovered": 3, 00:19:29.123 "num_base_bdevs_operational": 3, 00:19:29.123 "base_bdevs_list": [ 00:19:29.123 { 00:19:29.123 "name": "spare", 00:19:29.123 "uuid": "1d40114f-85f9-5014-9655-ea1a7b5581d7", 00:19:29.123 "is_configured": true, 00:19:29.123 "data_offset": 0, 00:19:29.123 "data_size": 65536 00:19:29.123 }, 00:19:29.123 { 00:19:29.123 "name": null, 00:19:29.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.123 "is_configured": false, 00:19:29.123 "data_offset": 0, 00:19:29.123 "data_size": 65536 00:19:29.123 }, 00:19:29.123 { 00:19:29.123 "name": "BaseBdev3", 00:19:29.123 "uuid": "8a851cb7-c073-52ab-90de-9f4a71fdca34", 00:19:29.123 "is_configured": true, 00:19:29.123 "data_offset": 0, 00:19:29.123 "data_size": 65536 00:19:29.123 }, 00:19:29.123 { 00:19:29.123 "name": "BaseBdev4", 00:19:29.123 "uuid": "0378be54-540a-5015-8c28-edbbeddd0c0a", 00:19:29.123 "is_configured": true, 00:19:29.123 "data_offset": 0, 00:19:29.123 "data_size": 65536 00:19:29.123 } 00:19:29.123 ] 00:19:29.123 }' 00:19:29.123 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.382 "name": "raid_bdev1", 00:19:29.382 "uuid": "fd607b6f-5993-4e85-a318-8383d0e1cf3e", 00:19:29.382 "strip_size_kb": 0, 00:19:29.382 "state": "online", 00:19:29.382 "raid_level": "raid1", 00:19:29.382 "superblock": false, 00:19:29.382 "num_base_bdevs": 4, 00:19:29.382 "num_base_bdevs_discovered": 3, 00:19:29.382 "num_base_bdevs_operational": 3, 00:19:29.382 "base_bdevs_list": [ 00:19:29.382 { 00:19:29.382 
"name": "spare", 00:19:29.382 "uuid": "1d40114f-85f9-5014-9655-ea1a7b5581d7", 00:19:29.382 "is_configured": true, 00:19:29.382 "data_offset": 0, 00:19:29.382 "data_size": 65536 00:19:29.382 }, 00:19:29.382 { 00:19:29.382 "name": null, 00:19:29.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.382 "is_configured": false, 00:19:29.382 "data_offset": 0, 00:19:29.382 "data_size": 65536 00:19:29.382 }, 00:19:29.382 { 00:19:29.382 "name": "BaseBdev3", 00:19:29.382 "uuid": "8a851cb7-c073-52ab-90de-9f4a71fdca34", 00:19:29.382 "is_configured": true, 00:19:29.382 "data_offset": 0, 00:19:29.382 "data_size": 65536 00:19:29.382 }, 00:19:29.382 { 00:19:29.382 "name": "BaseBdev4", 00:19:29.382 "uuid": "0378be54-540a-5015-8c28-edbbeddd0c0a", 00:19:29.382 "is_configured": true, 00:19:29.382 "data_offset": 0, 00:19:29.382 "data_size": 65536 00:19:29.382 } 00:19:29.382 ] 00:19:29.382 }' 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.382 74.33 IOPS, 223.00 MiB/s [2024-12-06T06:45:48.029Z] 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.382 06:45:47 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.382 06:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.382 06:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.382 06:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.382 06:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.640 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.640 "name": "raid_bdev1", 00:19:29.640 "uuid": "fd607b6f-5993-4e85-a318-8383d0e1cf3e", 00:19:29.640 "strip_size_kb": 0, 00:19:29.640 "state": "online", 00:19:29.640 "raid_level": "raid1", 00:19:29.640 "superblock": false, 00:19:29.640 "num_base_bdevs": 4, 00:19:29.640 "num_base_bdevs_discovered": 3, 00:19:29.640 "num_base_bdevs_operational": 3, 00:19:29.640 "base_bdevs_list": [ 00:19:29.640 { 00:19:29.640 "name": "spare", 00:19:29.640 "uuid": "1d40114f-85f9-5014-9655-ea1a7b5581d7", 00:19:29.640 "is_configured": true, 00:19:29.640 "data_offset": 0, 00:19:29.640 "data_size": 65536 00:19:29.640 }, 00:19:29.640 { 00:19:29.640 "name": null, 00:19:29.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.640 "is_configured": false, 00:19:29.640 "data_offset": 0, 00:19:29.640 "data_size": 65536 00:19:29.640 }, 
00:19:29.640 { 00:19:29.640 "name": "BaseBdev3", 00:19:29.640 "uuid": "8a851cb7-c073-52ab-90de-9f4a71fdca34", 00:19:29.640 "is_configured": true, 00:19:29.640 "data_offset": 0, 00:19:29.640 "data_size": 65536 00:19:29.640 }, 00:19:29.640 { 00:19:29.640 "name": "BaseBdev4", 00:19:29.640 "uuid": "0378be54-540a-5015-8c28-edbbeddd0c0a", 00:19:29.640 "is_configured": true, 00:19:29.640 "data_offset": 0, 00:19:29.640 "data_size": 65536 00:19:29.640 } 00:19:29.640 ] 00:19:29.640 }' 00:19:29.640 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.640 06:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.898 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:29.898 06:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.898 06:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.898 [2024-12-06 06:45:48.518033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:29.898 [2024-12-06 06:45:48.518077] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:30.192 00:19:30.192 Latency(us) 00:19:30.192 [2024-12-06T06:45:48.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.192 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:30.192 raid_bdev1 : 9.69 71.72 215.16 0.00 0.00 18773.18 275.55 122969.37 00:19:30.192 [2024-12-06T06:45:48.839Z] =================================================================================================================== 00:19:30.192 [2024-12-06T06:45:48.839Z] Total : 71.72 215.16 0.00 0.00 18773.18 275.55 122969.37 00:19:30.192 [2024-12-06 06:45:48.633772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:30.192 [2024-12-06 06:45:48.633877] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.192 [2024-12-06 06:45:48.634022] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:30.192 [2024-12-06 06:45:48.634043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:30.192 { 00:19:30.192 "results": [ 00:19:30.192 { 00:19:30.192 "job": "raid_bdev1", 00:19:30.192 "core_mask": "0x1", 00:19:30.192 "workload": "randrw", 00:19:30.192 "percentage": 50, 00:19:30.192 "status": "finished", 00:19:30.192 "queue_depth": 2, 00:19:30.192 "io_size": 3145728, 00:19:30.192 "runtime": 9.690578, 00:19:30.192 "iops": 71.71914822831002, 00:19:30.192 "mibps": 215.15744468493006, 00:19:30.192 "io_failed": 0, 00:19:30.192 "io_timeout": 0, 00:19:30.192 "avg_latency_us": 18773.184209287116, 00:19:30.192 "min_latency_us": 275.5490909090909, 00:19:30.192 "max_latency_us": 122969.36727272728 00:19:30.192 } 00:19:30.192 ], 00:19:30.192 "core_count": 1 00:19:30.192 } 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 
00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:30.192 06:45:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:19:30.450 /dev/nbd0 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:19:30.450 
06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.450 1+0 records in 00:19:30.450 1+0 records out 00:19:30.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510817 s, 8.0 MB/s 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:19:30.450 06:45:49 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:30.450 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:19:30.708 /dev/nbd1 00:19:30.708 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.966 1+0 records in 00:19:30.966 1+0 records out 00:19:30.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378651 s, 10.8 MB/s 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:30.966 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:31.224 06:45:49 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:31.224 06:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:19:31.482 /dev/nbd1 00:19:31.482 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:31.739 1+0 records in 00:19:31.739 1+0 records out 00:19:31.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333228 s, 12.3 MB/s 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:31.739 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:31.740 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:31.740 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:31.740 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:31.740 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:31.740 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.740 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:31.996 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:31.996 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:31.996 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:31.996 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:31.996 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.996 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:19:31.996 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:31.996 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.996 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:31.996 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:31.996 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:31.996 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:31.996 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:31.996 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.996 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:32.254 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:32.254 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:32.254 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:32.254 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:32.254 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:32.254 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:32.254 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:32.254 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:32.254 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:32.254 06:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # 
killprocess 79235 00:19:32.254 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79235 ']' 00:19:32.254 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79235 00:19:32.254 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:19:32.254 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.254 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79235 00:19:32.512 killing process with pid 79235 00:19:32.512 Received shutdown signal, test time was about 11.986116 seconds 00:19:32.512 00:19:32.512 Latency(us) 00:19:32.512 [2024-12-06T06:45:51.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.512 [2024-12-06T06:45:51.159Z] =================================================================================================================== 00:19:32.512 [2024-12-06T06:45:51.159Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.512 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:32.512 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:32.512 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79235' 00:19:32.512 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79235 00:19:32.512 [2024-12-06 06:45:50.909512] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:32.512 06:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79235 00:19:32.770 [2024-12-06 06:45:51.292688] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:19:34.144 00:19:34.144 real 0m15.611s 00:19:34.144 user 
0m20.426s 00:19:34.144 sys 0m1.791s 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.144 ************************************ 00:19:34.144 END TEST raid_rebuild_test_io 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.144 ************************************ 00:19:34.144 06:45:52 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:19:34.144 06:45:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:34.144 06:45:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.144 06:45:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.144 ************************************ 00:19:34.144 START TEST raid_rebuild_test_sb_io 00:19:34.144 ************************************ 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:34.144 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79674 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79674 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79674 ']' 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.145 06:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.145 [2024-12-06 06:45:52.582401] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:19:34.145 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:34.145 Zero copy mechanism will not be used. 
00:19:34.145 [2024-12-06 06:45:52.582627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79674 ] 00:19:34.145 [2024-12-06 06:45:52.773078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.403 [2024-12-06 06:45:52.929946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.662 [2024-12-06 06:45:53.152906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.662 [2024-12-06 06:45:53.152983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.969 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.969 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:19:34.969 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:34.969 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:34.969 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.969 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.969 BaseBdev1_malloc 00:19:34.969 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.969 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:34.969 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.969 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.969 [2024-12-06 06:45:53.598278] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:34.969 [2024-12-06 06:45:53.598352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.969 [2024-12-06 06:45:53.598384] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:34.969 [2024-12-06 06:45:53.598403] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.969 [2024-12-06 06:45:53.601134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.969 [2024-12-06 06:45:53.601183] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:34.969 BaseBdev1 00:19:34.969 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.969 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:34.969 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:34.969 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.969 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.229 BaseBdev2_malloc 00:19:35.229 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.229 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:35.229 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.229 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.229 [2024-12-06 06:45:53.651822] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:35.229 [2024-12-06 06:45:53.651903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:19:35.229 [2024-12-06 06:45:53.651937] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:35.229 [2024-12-06 06:45:53.651957] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.229 [2024-12-06 06:45:53.654804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.229 [2024-12-06 06:45:53.654854] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:35.229 BaseBdev2 00:19:35.229 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.229 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:35.229 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:35.229 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.229 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.229 BaseBdev3_malloc 00:19:35.229 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.229 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:35.229 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.229 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.229 [2024-12-06 06:45:53.716074] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:35.229 [2024-12-06 06:45:53.716191] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.229 [2024-12-06 06:45:53.716227] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:35.229 
[2024-12-06 06:45:53.716246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.229 [2024-12-06 06:45:53.719395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.229 [2024-12-06 06:45:53.719459] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:35.229 BaseBdev3 00:19:35.229 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.229 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:35.229 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:35.229 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.230 BaseBdev4_malloc 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.230 [2024-12-06 06:45:53.773570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:35.230 [2024-12-06 06:45:53.773659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.230 [2024-12-06 06:45:53.773691] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:35.230 [2024-12-06 06:45:53.773710] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.230 [2024-12-06 06:45:53.776611] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.230 [2024-12-06 06:45:53.776665] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:35.230 BaseBdev4 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.230 spare_malloc 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.230 spare_delay 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.230 [2024-12-06 06:45:53.834123] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:35.230 [2024-12-06 06:45:53.834187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.230 [2024-12-06 06:45:53.834212] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:19:35.230 [2024-12-06 06:45:53.834229] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.230 [2024-12-06 06:45:53.837037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.230 [2024-12-06 06:45:53.837085] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:35.230 spare 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.230 [2024-12-06 06:45:53.842181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:35.230 [2024-12-06 06:45:53.844821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:35.230 [2024-12-06 06:45:53.844918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:35.230 [2024-12-06 06:45:53.845017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:35.230 [2024-12-06 06:45:53.845270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:35.230 [2024-12-06 06:45:53.845301] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:35.230 [2024-12-06 06:45:53.845620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:35.230 [2024-12-06 06:45:53.845857] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:35.230 [2024-12-06 06:45:53.845885] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:35.230 [2024-12-06 06:45:53.846129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.230 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.489 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.489 "name": "raid_bdev1", 00:19:35.489 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:35.489 "strip_size_kb": 0, 00:19:35.489 "state": "online", 00:19:35.489 "raid_level": "raid1", 00:19:35.489 "superblock": true, 00:19:35.489 "num_base_bdevs": 4, 00:19:35.489 "num_base_bdevs_discovered": 4, 00:19:35.489 "num_base_bdevs_operational": 4, 00:19:35.489 "base_bdevs_list": [ 00:19:35.489 { 00:19:35.489 "name": "BaseBdev1", 00:19:35.489 "uuid": "8eb1757e-6a75-5083-beed-9b8cb32715d3", 00:19:35.489 "is_configured": true, 00:19:35.489 "data_offset": 2048, 00:19:35.489 "data_size": 63488 00:19:35.489 }, 00:19:35.489 { 00:19:35.489 "name": "BaseBdev2", 00:19:35.489 "uuid": "abd02b0f-98c7-550a-9111-9e51acb0806b", 00:19:35.489 "is_configured": true, 00:19:35.489 "data_offset": 2048, 00:19:35.489 "data_size": 63488 00:19:35.489 }, 00:19:35.489 { 00:19:35.489 "name": "BaseBdev3", 00:19:35.489 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:35.489 "is_configured": true, 00:19:35.490 "data_offset": 2048, 00:19:35.490 "data_size": 63488 00:19:35.490 }, 00:19:35.490 { 00:19:35.490 "name": "BaseBdev4", 00:19:35.490 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:35.490 "is_configured": true, 00:19:35.490 "data_offset": 2048, 00:19:35.490 "data_size": 63488 00:19:35.490 } 00:19:35.490 ] 00:19:35.490 }' 00:19:35.490 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.490 06:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.749 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:35.749 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:35.749 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.749 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.749 [2024-12-06 06:45:54.394847] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.009 [2024-12-06 06:45:54.502393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.009 06:45:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.009 "name": "raid_bdev1", 00:19:36.009 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:36.009 "strip_size_kb": 0, 00:19:36.009 "state": "online", 00:19:36.009 "raid_level": "raid1", 00:19:36.009 
"superblock": true, 00:19:36.009 "num_base_bdevs": 4, 00:19:36.009 "num_base_bdevs_discovered": 3, 00:19:36.009 "num_base_bdevs_operational": 3, 00:19:36.009 "base_bdevs_list": [ 00:19:36.009 { 00:19:36.009 "name": null, 00:19:36.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.009 "is_configured": false, 00:19:36.009 "data_offset": 0, 00:19:36.009 "data_size": 63488 00:19:36.009 }, 00:19:36.009 { 00:19:36.009 "name": "BaseBdev2", 00:19:36.009 "uuid": "abd02b0f-98c7-550a-9111-9e51acb0806b", 00:19:36.009 "is_configured": true, 00:19:36.009 "data_offset": 2048, 00:19:36.009 "data_size": 63488 00:19:36.009 }, 00:19:36.009 { 00:19:36.009 "name": "BaseBdev3", 00:19:36.009 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:36.009 "is_configured": true, 00:19:36.009 "data_offset": 2048, 00:19:36.009 "data_size": 63488 00:19:36.009 }, 00:19:36.009 { 00:19:36.009 "name": "BaseBdev4", 00:19:36.009 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:36.009 "is_configured": true, 00:19:36.009 "data_offset": 2048, 00:19:36.009 "data_size": 63488 00:19:36.009 } 00:19:36.009 ] 00:19:36.009 }' 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.009 06:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.009 [2024-12-06 06:45:54.634698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:36.009 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:36.009 Zero copy mechanism will not be used. 00:19:36.009 Running I/O for 60 seconds... 
00:19:36.575 06:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:36.575 06:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.575 06:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.575 [2024-12-06 06:45:55.045165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:36.575 06:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.575 06:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:36.575 [2024-12-06 06:45:55.105427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:19:36.575 [2024-12-06 06:45:55.108050] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:36.833 [2024-12-06 06:45:55.229608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:36.833 [2024-12-06 06:45:55.230112] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:36.833 [2024-12-06 06:45:55.365960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:37.091 [2024-12-06 06:45:55.621891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:37.091 [2024-12-06 06:45:55.622571] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:37.349 154.00 IOPS, 462.00 MiB/s [2024-12-06T06:45:55.996Z] [2024-12-06 06:45:55.782581] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:37.349 [2024-12-06 06:45:55.783004] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:37.608 [2024-12-06 06:45:56.048399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.608 "name": "raid_bdev1", 00:19:37.608 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:37.608 "strip_size_kb": 0, 00:19:37.608 "state": "online", 00:19:37.608 "raid_level": "raid1", 00:19:37.608 "superblock": true, 00:19:37.608 "num_base_bdevs": 4, 00:19:37.608 "num_base_bdevs_discovered": 4, 00:19:37.608 "num_base_bdevs_operational": 4, 00:19:37.608 "process": { 00:19:37.608 "type": "rebuild", 00:19:37.608 "target": "spare", 00:19:37.608 "progress": { 
00:19:37.608 "blocks": 14336, 00:19:37.608 "percent": 22 00:19:37.608 } 00:19:37.608 }, 00:19:37.608 "base_bdevs_list": [ 00:19:37.608 { 00:19:37.608 "name": "spare", 00:19:37.608 "uuid": "29db02a8-ca5a-528d-ac0a-815e90b74f3e", 00:19:37.608 "is_configured": true, 00:19:37.608 "data_offset": 2048, 00:19:37.608 "data_size": 63488 00:19:37.608 }, 00:19:37.608 { 00:19:37.608 "name": "BaseBdev2", 00:19:37.608 "uuid": "abd02b0f-98c7-550a-9111-9e51acb0806b", 00:19:37.608 "is_configured": true, 00:19:37.608 "data_offset": 2048, 00:19:37.608 "data_size": 63488 00:19:37.608 }, 00:19:37.608 { 00:19:37.608 "name": "BaseBdev3", 00:19:37.608 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:37.608 "is_configured": true, 00:19:37.608 "data_offset": 2048, 00:19:37.608 "data_size": 63488 00:19:37.608 }, 00:19:37.608 { 00:19:37.608 "name": "BaseBdev4", 00:19:37.608 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:37.608 "is_configured": true, 00:19:37.608 "data_offset": 2048, 00:19:37.608 "data_size": 63488 00:19:37.608 } 00:19:37.608 ] 00:19:37.608 }' 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.608 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:37.608 [2024-12-06 06:45:56.245575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:37.867 [2024-12-06 
06:45:56.313384] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:37.867 [2024-12-06 06:45:56.317367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.867 [2024-12-06 06:45:56.317438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:37.867 [2024-12-06 06:45:56.317460] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:37.867 [2024-12-06 06:45:56.365902] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:19:37.867 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.867 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:37.867 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.867 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.867 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.868 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.868 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:37.868 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.868 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.868 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.868 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.868 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:37.868 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.868 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.868 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:37.868 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.868 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.868 "name": "raid_bdev1", 00:19:37.868 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:37.868 "strip_size_kb": 0, 00:19:37.868 "state": "online", 00:19:37.868 "raid_level": "raid1", 00:19:37.868 "superblock": true, 00:19:37.868 "num_base_bdevs": 4, 00:19:37.868 "num_base_bdevs_discovered": 3, 00:19:37.868 "num_base_bdevs_operational": 3, 00:19:37.868 "base_bdevs_list": [ 00:19:37.868 { 00:19:37.868 "name": null, 00:19:37.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.868 "is_configured": false, 00:19:37.868 "data_offset": 0, 00:19:37.868 "data_size": 63488 00:19:37.868 }, 00:19:37.868 { 00:19:37.868 "name": "BaseBdev2", 00:19:37.868 "uuid": "abd02b0f-98c7-550a-9111-9e51acb0806b", 00:19:37.868 "is_configured": true, 00:19:37.868 "data_offset": 2048, 00:19:37.868 "data_size": 63488 00:19:37.868 }, 00:19:37.868 { 00:19:37.868 "name": "BaseBdev3", 00:19:37.868 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:37.868 "is_configured": true, 00:19:37.868 "data_offset": 2048, 00:19:37.868 "data_size": 63488 00:19:37.868 }, 00:19:37.868 { 00:19:37.868 "name": "BaseBdev4", 00:19:37.868 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:37.868 "is_configured": true, 00:19:37.868 "data_offset": 2048, 00:19:37.868 "data_size": 63488 00:19:37.868 } 00:19:37.868 ] 00:19:37.868 }' 00:19:37.868 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.868 06:45:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:38.386 132.00 IOPS, 396.00 MiB/s [2024-12-06T06:45:57.033Z] 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:38.386 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.386 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:38.386 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:38.387 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.387 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.387 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.387 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:38.387 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.387 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.387 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.387 "name": "raid_bdev1", 00:19:38.387 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:38.387 "strip_size_kb": 0, 00:19:38.387 "state": "online", 00:19:38.387 "raid_level": "raid1", 00:19:38.387 "superblock": true, 00:19:38.387 "num_base_bdevs": 4, 00:19:38.387 "num_base_bdevs_discovered": 3, 00:19:38.387 "num_base_bdevs_operational": 3, 00:19:38.387 "base_bdevs_list": [ 00:19:38.387 { 00:19:38.387 "name": null, 00:19:38.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.387 "is_configured": false, 00:19:38.387 "data_offset": 0, 00:19:38.387 "data_size": 63488 00:19:38.387 }, 00:19:38.387 { 
00:19:38.387 "name": "BaseBdev2", 00:19:38.387 "uuid": "abd02b0f-98c7-550a-9111-9e51acb0806b", 00:19:38.387 "is_configured": true, 00:19:38.387 "data_offset": 2048, 00:19:38.387 "data_size": 63488 00:19:38.387 }, 00:19:38.387 { 00:19:38.387 "name": "BaseBdev3", 00:19:38.387 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:38.387 "is_configured": true, 00:19:38.387 "data_offset": 2048, 00:19:38.387 "data_size": 63488 00:19:38.387 }, 00:19:38.387 { 00:19:38.387 "name": "BaseBdev4", 00:19:38.387 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:38.387 "is_configured": true, 00:19:38.387 "data_offset": 2048, 00:19:38.387 "data_size": 63488 00:19:38.387 } 00:19:38.387 ] 00:19:38.387 }' 00:19:38.387 06:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.387 06:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:38.387 06:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.739 06:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:38.739 06:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:38.739 06:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.739 06:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:38.739 [2024-12-06 06:45:57.085409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:38.739 06:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.739 06:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:38.739 [2024-12-06 06:45:57.168807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:38.739 [2024-12-06 06:45:57.171424] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:39.025 [2024-12-06 06:45:57.452009] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:39.025 [2024-12-06 06:45:57.452947] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:39.283 130.33 IOPS, 391.00 MiB/s [2024-12-06T06:45:57.930Z] [2024-12-06 06:45:57.838926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:39.541 [2024-12-06 06:45:57.960406] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:39.541 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:39.541 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.541 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:39.541 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:39.541 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.541 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.541 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.541 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.541 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:39.541 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.800 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.800 "name": "raid_bdev1", 00:19:39.800 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:39.800 "strip_size_kb": 0, 00:19:39.800 "state": "online", 00:19:39.800 "raid_level": "raid1", 00:19:39.800 "superblock": true, 00:19:39.800 "num_base_bdevs": 4, 00:19:39.800 "num_base_bdevs_discovered": 4, 00:19:39.800 "num_base_bdevs_operational": 4, 00:19:39.800 "process": { 00:19:39.800 "type": "rebuild", 00:19:39.800 "target": "spare", 00:19:39.800 "progress": { 00:19:39.800 "blocks": 12288, 00:19:39.800 "percent": 19 00:19:39.800 } 00:19:39.800 }, 00:19:39.800 "base_bdevs_list": [ 00:19:39.800 { 00:19:39.800 "name": "spare", 00:19:39.800 "uuid": "29db02a8-ca5a-528d-ac0a-815e90b74f3e", 00:19:39.800 "is_configured": true, 00:19:39.800 "data_offset": 2048, 00:19:39.800 "data_size": 63488 00:19:39.800 }, 00:19:39.800 { 00:19:39.800 "name": "BaseBdev2", 00:19:39.800 "uuid": "abd02b0f-98c7-550a-9111-9e51acb0806b", 00:19:39.800 "is_configured": true, 00:19:39.800 "data_offset": 2048, 00:19:39.800 "data_size": 63488 00:19:39.800 }, 00:19:39.800 { 00:19:39.800 "name": "BaseBdev3", 00:19:39.800 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:39.800 "is_configured": true, 00:19:39.800 "data_offset": 2048, 00:19:39.800 "data_size": 63488 00:19:39.800 }, 00:19:39.800 { 00:19:39.800 "name": "BaseBdev4", 00:19:39.800 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:39.800 "is_configured": true, 00:19:39.800 "data_offset": 2048, 00:19:39.800 "data_size": 63488 00:19:39.800 } 00:19:39.800 ] 00:19:39.800 }' 00:19:39.800 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.800 [2024-12-06 06:45:58.209953] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:39.800 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:39.800 06:45:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.800 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:39.800 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:39.800 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:39.800 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:39.800 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:39.800 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:39.800 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:19:39.800 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:39.800 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.800 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:39.800 [2024-12-06 06:45:58.282088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:40.059 [2024-12-06 06:45:58.461208] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:40.059 [2024-12-06 06:45:58.571585] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:19:40.059 [2024-12-06 06:45:58.571665] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:19:40.059 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.059 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:19:40.059 06:45:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:19:40.059 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.059 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.059 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:40.059 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:40.059 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.059 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.059 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.059 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.059 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.059 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.059 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.059 "name": "raid_bdev1", 00:19:40.059 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:40.059 "strip_size_kb": 0, 00:19:40.059 "state": "online", 00:19:40.059 "raid_level": "raid1", 00:19:40.059 "superblock": true, 00:19:40.059 "num_base_bdevs": 4, 00:19:40.059 "num_base_bdevs_discovered": 3, 00:19:40.059 "num_base_bdevs_operational": 3, 00:19:40.059 "process": { 00:19:40.059 "type": "rebuild", 00:19:40.059 "target": "spare", 00:19:40.059 "progress": { 00:19:40.059 "blocks": 16384, 00:19:40.059 "percent": 25 00:19:40.059 } 00:19:40.059 }, 00:19:40.059 "base_bdevs_list": [ 00:19:40.059 { 00:19:40.059 "name": "spare", 00:19:40.059 
"uuid": "29db02a8-ca5a-528d-ac0a-815e90b74f3e", 00:19:40.059 "is_configured": true, 00:19:40.059 "data_offset": 2048, 00:19:40.059 "data_size": 63488 00:19:40.059 }, 00:19:40.059 { 00:19:40.059 "name": null, 00:19:40.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.059 "is_configured": false, 00:19:40.059 "data_offset": 0, 00:19:40.059 "data_size": 63488 00:19:40.059 }, 00:19:40.059 { 00:19:40.059 "name": "BaseBdev3", 00:19:40.059 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:40.059 "is_configured": true, 00:19:40.059 "data_offset": 2048, 00:19:40.059 "data_size": 63488 00:19:40.059 }, 00:19:40.059 { 00:19:40.059 "name": "BaseBdev4", 00:19:40.059 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:40.059 "is_configured": true, 00:19:40.059 "data_offset": 2048, 00:19:40.059 "data_size": 63488 00:19:40.059 } 00:19:40.059 ] 00:19:40.059 }' 00:19:40.059 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.059 116.00 IOPS, 348.00 MiB/s [2024-12-06T06:45:58.706Z] 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.059 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=538 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:40.318 06:45:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.318 "name": "raid_bdev1", 00:19:40.318 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:40.318 "strip_size_kb": 0, 00:19:40.318 "state": "online", 00:19:40.318 "raid_level": "raid1", 00:19:40.318 "superblock": true, 00:19:40.318 "num_base_bdevs": 4, 00:19:40.318 "num_base_bdevs_discovered": 3, 00:19:40.318 "num_base_bdevs_operational": 3, 00:19:40.318 "process": { 00:19:40.318 "type": "rebuild", 00:19:40.318 "target": "spare", 00:19:40.318 "progress": { 00:19:40.318 "blocks": 18432, 00:19:40.318 "percent": 29 00:19:40.318 } 00:19:40.318 }, 00:19:40.318 "base_bdevs_list": [ 00:19:40.318 { 00:19:40.318 "name": "spare", 00:19:40.318 "uuid": "29db02a8-ca5a-528d-ac0a-815e90b74f3e", 00:19:40.318 "is_configured": true, 00:19:40.318 "data_offset": 2048, 00:19:40.318 "data_size": 63488 00:19:40.318 }, 00:19:40.318 { 00:19:40.318 "name": null, 00:19:40.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.318 "is_configured": false, 00:19:40.318 "data_offset": 0, 00:19:40.318 "data_size": 63488 00:19:40.318 }, 00:19:40.318 { 00:19:40.318 "name": "BaseBdev3", 00:19:40.318 "uuid": 
"f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:40.318 "is_configured": true, 00:19:40.318 "data_offset": 2048, 00:19:40.318 "data_size": 63488 00:19:40.318 }, 00:19:40.318 { 00:19:40.318 "name": "BaseBdev4", 00:19:40.318 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:40.318 "is_configured": true, 00:19:40.318 "data_offset": 2048, 00:19:40.318 "data_size": 63488 00:19:40.318 } 00:19:40.318 ] 00:19:40.318 }' 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.318 [2024-12-06 06:45:58.825930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:40.318 06:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:40.885 [2024-12-06 06:45:59.291390] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:41.163 [2024-12-06 06:45:59.534258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:41.163 [2024-12-06 06:45:59.534741] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:41.163 [2024-12-06 06:45:59.648707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:41.422 104.40 IOPS, 313.20 MiB/s [2024-12-06T06:46:00.069Z] 06:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:41.422 06:45:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:41.422 06:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.422 06:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:41.422 06:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:41.423 06:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.423 06:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.423 06:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.423 06:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.423 06:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:41.423 06:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.423 [2024-12-06 06:45:59.954700] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:19:41.423 06:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.423 "name": "raid_bdev1", 00:19:41.423 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:41.423 "strip_size_kb": 0, 00:19:41.423 "state": "online", 00:19:41.423 "raid_level": "raid1", 00:19:41.423 "superblock": true, 00:19:41.423 "num_base_bdevs": 4, 00:19:41.423 "num_base_bdevs_discovered": 3, 00:19:41.423 "num_base_bdevs_operational": 3, 00:19:41.423 "process": { 00:19:41.423 "type": "rebuild", 00:19:41.423 "target": "spare", 00:19:41.423 "progress": { 00:19:41.423 "blocks": 38912, 00:19:41.423 "percent": 61 00:19:41.423 } 00:19:41.423 }, 00:19:41.423 "base_bdevs_list": [ 00:19:41.423 { 
00:19:41.423 "name": "spare", 00:19:41.423 "uuid": "29db02a8-ca5a-528d-ac0a-815e90b74f3e", 00:19:41.423 "is_configured": true, 00:19:41.423 "data_offset": 2048, 00:19:41.423 "data_size": 63488 00:19:41.423 }, 00:19:41.423 { 00:19:41.423 "name": null, 00:19:41.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.423 "is_configured": false, 00:19:41.423 "data_offset": 0, 00:19:41.423 "data_size": 63488 00:19:41.423 }, 00:19:41.423 { 00:19:41.423 "name": "BaseBdev3", 00:19:41.423 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:41.423 "is_configured": true, 00:19:41.423 "data_offset": 2048, 00:19:41.423 "data_size": 63488 00:19:41.423 }, 00:19:41.423 { 00:19:41.423 "name": "BaseBdev4", 00:19:41.423 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:41.423 "is_configured": true, 00:19:41.423 "data_offset": 2048, 00:19:41.423 "data_size": 63488 00:19:41.423 } 00:19:41.423 ] 00:19:41.423 }' 00:19:41.423 06:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.423 06:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:41.423 06:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.423 06:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:41.423 06:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:41.681 [2024-12-06 06:46:00.286953] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:19:41.682 [2024-12-06 06:46:00.288117] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:19:41.941 [2024-12-06 06:46:00.525318] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:42.459 93.50 IOPS, 280.50 
MiB/s [2024-12-06T06:46:01.106Z] 06:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:42.460 06:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:42.460 06:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.460 06:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:42.460 06:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:42.460 06:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.460 06:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.460 06:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.460 06:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.460 06:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:42.460 06:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.758 06:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:42.758 "name": "raid_bdev1", 00:19:42.758 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:42.758 "strip_size_kb": 0, 00:19:42.758 "state": "online", 00:19:42.758 "raid_level": "raid1", 00:19:42.758 "superblock": true, 00:19:42.758 "num_base_bdevs": 4, 00:19:42.758 "num_base_bdevs_discovered": 3, 00:19:42.758 "num_base_bdevs_operational": 3, 00:19:42.758 "process": { 00:19:42.758 "type": "rebuild", 00:19:42.758 "target": "spare", 00:19:42.758 "progress": { 00:19:42.758 "blocks": 53248, 00:19:42.758 "percent": 83 00:19:42.758 } 00:19:42.758 }, 00:19:42.758 "base_bdevs_list": [ 00:19:42.758 { 00:19:42.758 
"name": "spare", 00:19:42.758 "uuid": "29db02a8-ca5a-528d-ac0a-815e90b74f3e", 00:19:42.758 "is_configured": true, 00:19:42.758 "data_offset": 2048, 00:19:42.758 "data_size": 63488 00:19:42.758 }, 00:19:42.758 { 00:19:42.758 "name": null, 00:19:42.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.758 "is_configured": false, 00:19:42.758 "data_offset": 0, 00:19:42.758 "data_size": 63488 00:19:42.758 }, 00:19:42.758 { 00:19:42.758 "name": "BaseBdev3", 00:19:42.758 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:42.758 "is_configured": true, 00:19:42.758 "data_offset": 2048, 00:19:42.758 "data_size": 63488 00:19:42.758 }, 00:19:42.758 { 00:19:42.758 "name": "BaseBdev4", 00:19:42.758 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:42.758 "is_configured": true, 00:19:42.758 "data_offset": 2048, 00:19:42.758 "data_size": 63488 00:19:42.758 } 00:19:42.758 ] 00:19:42.758 }' 00:19:42.758 06:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.758 06:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:42.758 06:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.758 06:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:42.758 06:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:42.758 [2024-12-06 06:46:01.313863] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:19:43.016 [2024-12-06 06:46:01.546201] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:43.016 [2024-12-06 06:46:01.646187] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:43.016 [2024-12-06 06:46:01.657808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:19:43.843 86.29 IOPS, 258.86 MiB/s [2024-12-06T06:46:02.490Z] 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:43.843 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.843 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.843 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:43.843 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:43.843 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.843 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.844 "name": "raid_bdev1", 00:19:43.844 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:43.844 "strip_size_kb": 0, 00:19:43.844 "state": "online", 00:19:43.844 "raid_level": "raid1", 00:19:43.844 "superblock": true, 00:19:43.844 "num_base_bdevs": 4, 00:19:43.844 "num_base_bdevs_discovered": 3, 00:19:43.844 "num_base_bdevs_operational": 3, 00:19:43.844 "base_bdevs_list": [ 00:19:43.844 { 00:19:43.844 "name": "spare", 00:19:43.844 "uuid": "29db02a8-ca5a-528d-ac0a-815e90b74f3e", 00:19:43.844 "is_configured": true, 00:19:43.844 "data_offset": 2048, 
00:19:43.844 "data_size": 63488 00:19:43.844 }, 00:19:43.844 { 00:19:43.844 "name": null, 00:19:43.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.844 "is_configured": false, 00:19:43.844 "data_offset": 0, 00:19:43.844 "data_size": 63488 00:19:43.844 }, 00:19:43.844 { 00:19:43.844 "name": "BaseBdev3", 00:19:43.844 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:43.844 "is_configured": true, 00:19:43.844 "data_offset": 2048, 00:19:43.844 "data_size": 63488 00:19:43.844 }, 00:19:43.844 { 00:19:43.844 "name": "BaseBdev4", 00:19:43.844 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:43.844 "is_configured": true, 00:19:43.844 "data_offset": 2048, 00:19:43.844 "data_size": 63488 00:19:43.844 } 00:19:43.844 ] 00:19:43.844 }' 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.844 "name": "raid_bdev1", 00:19:43.844 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:43.844 "strip_size_kb": 0, 00:19:43.844 "state": "online", 00:19:43.844 "raid_level": "raid1", 00:19:43.844 "superblock": true, 00:19:43.844 "num_base_bdevs": 4, 00:19:43.844 "num_base_bdevs_discovered": 3, 00:19:43.844 "num_base_bdevs_operational": 3, 00:19:43.844 "base_bdevs_list": [ 00:19:43.844 { 00:19:43.844 "name": "spare", 00:19:43.844 "uuid": "29db02a8-ca5a-528d-ac0a-815e90b74f3e", 00:19:43.844 "is_configured": true, 00:19:43.844 "data_offset": 2048, 00:19:43.844 "data_size": 63488 00:19:43.844 }, 00:19:43.844 { 00:19:43.844 "name": null, 00:19:43.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.844 "is_configured": false, 00:19:43.844 "data_offset": 0, 00:19:43.844 "data_size": 63488 00:19:43.844 }, 00:19:43.844 { 00:19:43.844 "name": "BaseBdev3", 00:19:43.844 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:43.844 "is_configured": true, 00:19:43.844 "data_offset": 2048, 00:19:43.844 "data_size": 63488 00:19:43.844 }, 00:19:43.844 { 00:19:43.844 "name": "BaseBdev4", 00:19:43.844 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:43.844 "is_configured": true, 00:19:43.844 "data_offset": 2048, 00:19:43.844 "data_size": 63488 00:19:43.844 } 00:19:43.844 ] 00:19:43.844 }' 00:19:43.844 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.102 
06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.102 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.102 "name": "raid_bdev1", 00:19:44.102 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:44.102 "strip_size_kb": 0, 00:19:44.102 "state": "online", 00:19:44.102 "raid_level": "raid1", 00:19:44.102 "superblock": true, 00:19:44.102 "num_base_bdevs": 4, 00:19:44.102 "num_base_bdevs_discovered": 3, 00:19:44.102 "num_base_bdevs_operational": 3, 00:19:44.102 "base_bdevs_list": [ 00:19:44.102 { 00:19:44.102 "name": "spare", 00:19:44.102 "uuid": "29db02a8-ca5a-528d-ac0a-815e90b74f3e", 00:19:44.102 "is_configured": true, 00:19:44.102 "data_offset": 2048, 00:19:44.102 "data_size": 63488 00:19:44.102 }, 00:19:44.102 { 00:19:44.102 "name": null, 00:19:44.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.102 "is_configured": false, 00:19:44.103 "data_offset": 0, 00:19:44.103 "data_size": 63488 00:19:44.103 }, 00:19:44.103 { 00:19:44.103 "name": "BaseBdev3", 00:19:44.103 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:44.103 "is_configured": true, 00:19:44.103 "data_offset": 2048, 00:19:44.103 "data_size": 63488 00:19:44.103 }, 00:19:44.103 { 00:19:44.103 "name": "BaseBdev4", 00:19:44.103 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:44.103 "is_configured": true, 00:19:44.103 "data_offset": 2048, 00:19:44.103 "data_size": 63488 00:19:44.103 } 00:19:44.103 ] 00:19:44.103 }' 00:19:44.103 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.103 06:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.669 81.75 IOPS, 245.25 MiB/s [2024-12-06T06:46:03.316Z] 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:44.669 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.669 06:46:03 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.669 [2024-12-06 06:46:03.103224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:44.669 [2024-12-06 06:46:03.103264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:44.669 00:19:44.669 Latency(us) 00:19:44.669 [2024-12-06T06:46:03.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.669 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:44.669 raid_bdev1 : 8.57 77.87 233.61 0.00 0.00 18174.82 314.65 120586.24 00:19:44.669 [2024-12-06T06:46:03.316Z] =================================================================================================================== 00:19:44.669 [2024-12-06T06:46:03.316Z] Total : 77.87 233.61 0.00 0.00 18174.82 314.65 120586.24 00:19:44.669 [2024-12-06 06:46:03.223354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:44.669 [2024-12-06 06:46:03.223454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.670 [2024-12-06 06:46:03.223646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:44.670 [2024-12-06 06:46:03.223671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:44.670 { 00:19:44.670 "results": [ 00:19:44.670 { 00:19:44.670 "job": "raid_bdev1", 00:19:44.670 "core_mask": "0x1", 00:19:44.670 "workload": "randrw", 00:19:44.670 "percentage": 50, 00:19:44.670 "status": "finished", 00:19:44.670 "queue_depth": 2, 00:19:44.670 "io_size": 3145728, 00:19:44.670 "runtime": 8.565527, 00:19:44.670 "iops": 77.87028165342308, 00:19:44.670 "mibps": 233.61084496026922, 00:19:44.670 "io_failed": 0, 00:19:44.670 "io_timeout": 0, 00:19:44.670 "avg_latency_us": 18174.824849393488, 00:19:44.670 "min_latency_us": 
314.6472727272727, 00:19:44.670 "max_latency_us": 120586.24 00:19:44.670 } 00:19:44.670 ], 00:19:44.670 "core_count": 1 00:19:44.670 } 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:44.670 06:46:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:44.670 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:19:45.236 /dev/nbd0 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:45.236 1+0 records in 00:19:45.236 1+0 records out 00:19:45.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367006 s, 11.2 MB/s 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:45.236 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:45.237 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:45.237 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:45.237 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- 
# (( i < 1 )) 00:19:45.237 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:19:45.495 /dev/nbd1 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:45.495 1+0 records in 00:19:45.495 1+0 records out 00:19:45.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428133 s, 9.6 MB/s 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:45.495 06:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:45.754 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:45.754 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:45.754 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:45.754 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:45.754 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:45.754 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:45.754 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:46.013 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:46.013 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:46.014 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:19:46.272 /dev/nbd1 00:19:46.272 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:46.272 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:46.272 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:46.272 06:46:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:19:46.272 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:46.272 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:46.272 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:46.273 1+0 records in 00:19:46.273 1+0 records out 00:19:46.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433219 s, 9.5 MB/s 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 
00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:46.273 06:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:46.532 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:46.532 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:46.532 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:46.532 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:46.532 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:46.532 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:46.532 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:46.532 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:46.532 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:46.532 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:46.532 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0') 00:19:46.532 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:46.532 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:46.532 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:46.532 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.790 [2024-12-06 06:46:05.429302] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:46.790 [2024-12-06 06:46:05.429371] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.790 [2024-12-06 06:46:05.429407] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:46.790 [2024-12-06 06:46:05.429424] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.790 [2024-12-06 06:46:05.432589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.790 [2024-12-06 06:46:05.432639] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:46.790 [2024-12-06 06:46:05.432748] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:46.790 [2024-12-06 06:46:05.432822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:46.790 [2024-12-06 06:46:05.433023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:46.790 [2024-12-06 06:46:05.433187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:46.790 spare 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.790 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.048 [2024-12-06 06:46:05.533315] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007b00 00:19:47.048 [2024-12-06 06:46:05.533386] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:47.048 [2024-12-06 06:46:05.533856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:19:47.048 [2024-12-06 06:46:05.534127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:47.048 [2024-12-06 06:46:05.534151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:47.048 [2024-12-06 06:46:05.534412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.048 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.048 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:47.048 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.048 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.048 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.048 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.048 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:47.048 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.048 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.048 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.048 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.048 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.048 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.048 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.048 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.048 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.048 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.048 "name": "raid_bdev1", 00:19:47.048 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:47.048 "strip_size_kb": 0, 00:19:47.048 "state": "online", 00:19:47.049 "raid_level": "raid1", 00:19:47.049 "superblock": true, 00:19:47.049 "num_base_bdevs": 4, 00:19:47.049 "num_base_bdevs_discovered": 3, 00:19:47.049 "num_base_bdevs_operational": 3, 00:19:47.049 "base_bdevs_list": [ 00:19:47.049 { 00:19:47.049 "name": "spare", 00:19:47.049 "uuid": "29db02a8-ca5a-528d-ac0a-815e90b74f3e", 00:19:47.049 "is_configured": true, 00:19:47.049 "data_offset": 2048, 00:19:47.049 "data_size": 63488 00:19:47.049 }, 00:19:47.049 { 00:19:47.049 "name": null, 00:19:47.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.049 "is_configured": false, 00:19:47.049 "data_offset": 2048, 00:19:47.049 "data_size": 63488 00:19:47.049 }, 00:19:47.049 { 00:19:47.049 "name": "BaseBdev3", 00:19:47.049 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:47.049 "is_configured": true, 00:19:47.049 "data_offset": 2048, 00:19:47.049 "data_size": 63488 00:19:47.049 }, 00:19:47.049 { 00:19:47.049 "name": "BaseBdev4", 00:19:47.049 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:47.049 "is_configured": true, 00:19:47.049 "data_offset": 2048, 00:19:47.049 "data_size": 63488 00:19:47.049 } 00:19:47.049 ] 00:19:47.049 }' 00:19:47.049 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.049 06:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.615 "name": "raid_bdev1", 00:19:47.615 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:47.615 "strip_size_kb": 0, 00:19:47.615 "state": "online", 00:19:47.615 "raid_level": "raid1", 00:19:47.615 "superblock": true, 00:19:47.615 "num_base_bdevs": 4, 00:19:47.615 "num_base_bdevs_discovered": 3, 00:19:47.615 "num_base_bdevs_operational": 3, 00:19:47.615 "base_bdevs_list": [ 00:19:47.615 { 00:19:47.615 "name": "spare", 00:19:47.615 "uuid": "29db02a8-ca5a-528d-ac0a-815e90b74f3e", 00:19:47.615 "is_configured": true, 00:19:47.615 "data_offset": 2048, 00:19:47.615 "data_size": 63488 00:19:47.615 }, 
00:19:47.615 { 00:19:47.615 "name": null, 00:19:47.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.615 "is_configured": false, 00:19:47.615 "data_offset": 2048, 00:19:47.615 "data_size": 63488 00:19:47.615 }, 00:19:47.615 { 00:19:47.615 "name": "BaseBdev3", 00:19:47.615 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:47.615 "is_configured": true, 00:19:47.615 "data_offset": 2048, 00:19:47.615 "data_size": 63488 00:19:47.615 }, 00:19:47.615 { 00:19:47.615 "name": "BaseBdev4", 00:19:47.615 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:47.615 "is_configured": true, 00:19:47.615 "data_offset": 2048, 00:19:47.615 "data_size": 63488 00:19:47.615 } 00:19:47.615 ] 00:19:47.615 }' 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:47.615 06:46:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.615 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.615 [2024-12-06 06:46:06.258697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.873 
06:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.873 "name": "raid_bdev1", 00:19:47.873 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:47.873 "strip_size_kb": 0, 00:19:47.873 "state": "online", 00:19:47.873 "raid_level": "raid1", 00:19:47.873 "superblock": true, 00:19:47.873 "num_base_bdevs": 4, 00:19:47.873 "num_base_bdevs_discovered": 2, 00:19:47.873 "num_base_bdevs_operational": 2, 00:19:47.873 "base_bdevs_list": [ 00:19:47.873 { 00:19:47.873 "name": null, 00:19:47.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.873 "is_configured": false, 00:19:47.873 "data_offset": 0, 00:19:47.873 "data_size": 63488 00:19:47.873 }, 00:19:47.873 { 00:19:47.873 "name": null, 00:19:47.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.873 "is_configured": false, 00:19:47.873 "data_offset": 2048, 00:19:47.873 "data_size": 63488 00:19:47.873 }, 00:19:47.873 { 00:19:47.873 "name": "BaseBdev3", 00:19:47.873 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:47.873 "is_configured": true, 00:19:47.873 "data_offset": 2048, 00:19:47.873 "data_size": 63488 00:19:47.873 }, 00:19:47.873 { 00:19:47.873 "name": "BaseBdev4", 00:19:47.873 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:47.873 "is_configured": true, 00:19:47.873 "data_offset": 2048, 00:19:47.873 "data_size": 63488 00:19:47.873 } 00:19:47.873 ] 00:19:47.873 }' 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.873 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:48.132 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:48.132 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.132 06:46:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:48.132 [2024-12-06 06:46:06.746986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:48.132 [2024-12-06 06:46:06.747365] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:48.132 [2024-12-06 06:46:06.747403] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:48.132 [2024-12-06 06:46:06.747453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:48.132 [2024-12-06 06:46:06.761340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:19:48.132 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.132 06:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:48.132 [2024-12-06 06:46:06.763845] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.506 "name": "raid_bdev1", 00:19:49.506 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:49.506 "strip_size_kb": 0, 00:19:49.506 "state": "online", 00:19:49.506 "raid_level": "raid1", 00:19:49.506 "superblock": true, 00:19:49.506 "num_base_bdevs": 4, 00:19:49.506 "num_base_bdevs_discovered": 3, 00:19:49.506 "num_base_bdevs_operational": 3, 00:19:49.506 "process": { 00:19:49.506 "type": "rebuild", 00:19:49.506 "target": "spare", 00:19:49.506 "progress": { 00:19:49.506 "blocks": 20480, 00:19:49.506 "percent": 32 00:19:49.506 } 00:19:49.506 }, 00:19:49.506 "base_bdevs_list": [ 00:19:49.506 { 00:19:49.506 "name": "spare", 00:19:49.506 "uuid": "29db02a8-ca5a-528d-ac0a-815e90b74f3e", 00:19:49.506 "is_configured": true, 00:19:49.506 "data_offset": 2048, 00:19:49.506 "data_size": 63488 00:19:49.506 }, 00:19:49.506 { 00:19:49.506 "name": null, 00:19:49.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.506 "is_configured": false, 00:19:49.506 "data_offset": 2048, 00:19:49.506 "data_size": 63488 00:19:49.506 }, 00:19:49.506 { 00:19:49.506 "name": "BaseBdev3", 00:19:49.506 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:49.506 "is_configured": true, 00:19:49.506 "data_offset": 2048, 00:19:49.506 "data_size": 63488 00:19:49.506 }, 00:19:49.506 { 00:19:49.506 "name": "BaseBdev4", 00:19:49.506 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:49.506 "is_configured": true, 00:19:49.506 "data_offset": 2048, 00:19:49.506 "data_size": 63488 00:19:49.506 } 00:19:49.506 ] 00:19:49.506 }' 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.506 [2024-12-06 06:46:07.933186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:49.506 [2024-12-06 06:46:07.972984] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:49.506 [2024-12-06 06:46:07.973290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.506 [2024-12-06 06:46:07.973424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:49.506 [2024-12-06 06:46:07.973469] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.506 06:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.506 06:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.506 06:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.506 06:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.506 06:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.506 06:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.506 06:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.506 "name": "raid_bdev1", 00:19:49.506 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:49.506 "strip_size_kb": 0, 00:19:49.506 "state": "online", 00:19:49.506 "raid_level": "raid1", 00:19:49.506 "superblock": true, 00:19:49.506 "num_base_bdevs": 4, 00:19:49.506 "num_base_bdevs_discovered": 2, 00:19:49.506 "num_base_bdevs_operational": 2, 00:19:49.506 "base_bdevs_list": [ 00:19:49.506 { 00:19:49.506 "name": null, 00:19:49.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.506 "is_configured": false, 00:19:49.506 "data_offset": 0, 00:19:49.506 "data_size": 63488 00:19:49.506 }, 00:19:49.506 { 00:19:49.506 "name": null, 00:19:49.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.506 "is_configured": false, 00:19:49.506 
"data_offset": 2048, 00:19:49.506 "data_size": 63488 00:19:49.506 }, 00:19:49.506 { 00:19:49.506 "name": "BaseBdev3", 00:19:49.506 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:49.506 "is_configured": true, 00:19:49.506 "data_offset": 2048, 00:19:49.506 "data_size": 63488 00:19:49.506 }, 00:19:49.506 { 00:19:49.506 "name": "BaseBdev4", 00:19:49.506 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:49.506 "is_configured": true, 00:19:49.506 "data_offset": 2048, 00:19:49.506 "data_size": 63488 00:19:49.506 } 00:19:49.506 ] 00:19:49.506 }' 00:19:49.506 06:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.506 06:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:50.074 06:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:50.074 06:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.074 06:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:50.074 [2024-12-06 06:46:08.537020] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:50.074 [2024-12-06 06:46:08.537234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.074 [2024-12-06 06:46:08.537285] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:50.074 [2024-12-06 06:46:08.537304] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.074 [2024-12-06 06:46:08.537951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.074 [2024-12-06 06:46:08.537989] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:50.074 [2024-12-06 06:46:08.538127] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:50.074 [2024-12-06 
06:46:08.538152] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:50.074 [2024-12-06 06:46:08.538167] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:50.074 [2024-12-06 06:46:08.538201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.074 [2024-12-06 06:46:08.552297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:19:50.074 spare 00:19:50.074 06:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.074 06:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:50.074 [2024-12-06 06:46:08.555014] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:51.086 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.086 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.086 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:51.086 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.086 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.086 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.086 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.086 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:51.086 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.086 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.087 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.087 "name": "raid_bdev1", 00:19:51.087 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:51.087 "strip_size_kb": 0, 00:19:51.087 "state": "online", 00:19:51.087 "raid_level": "raid1", 00:19:51.087 "superblock": true, 00:19:51.087 "num_base_bdevs": 4, 00:19:51.087 "num_base_bdevs_discovered": 3, 00:19:51.087 "num_base_bdevs_operational": 3, 00:19:51.087 "process": { 00:19:51.087 "type": "rebuild", 00:19:51.087 "target": "spare", 00:19:51.087 "progress": { 00:19:51.087 "blocks": 20480, 00:19:51.087 "percent": 32 00:19:51.087 } 00:19:51.087 }, 00:19:51.087 "base_bdevs_list": [ 00:19:51.087 { 00:19:51.087 "name": "spare", 00:19:51.087 "uuid": "29db02a8-ca5a-528d-ac0a-815e90b74f3e", 00:19:51.087 "is_configured": true, 00:19:51.087 "data_offset": 2048, 00:19:51.087 "data_size": 63488 00:19:51.087 }, 00:19:51.087 { 00:19:51.087 "name": null, 00:19:51.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.087 "is_configured": false, 00:19:51.087 "data_offset": 2048, 00:19:51.087 "data_size": 63488 00:19:51.087 }, 00:19:51.087 { 00:19:51.087 "name": "BaseBdev3", 00:19:51.087 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:51.087 "is_configured": true, 00:19:51.087 "data_offset": 2048, 00:19:51.087 "data_size": 63488 00:19:51.087 }, 00:19:51.087 { 00:19:51.087 "name": "BaseBdev4", 00:19:51.087 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:51.087 "is_configured": true, 00:19:51.087 "data_offset": 2048, 00:19:51.087 "data_size": 63488 00:19:51.087 } 00:19:51.087 ] 00:19:51.087 }' 00:19:51.087 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.087 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.087 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:19:51.087 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.087 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:51.087 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.087 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:51.087 [2024-12-06 06:46:09.728574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:51.345 [2024-12-06 06:46:09.764543] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:51.345 [2024-12-06 06:46:09.764635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.345 [2024-12-06 06:46:09.764668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:51.345 [2024-12-06 06:46:09.764680] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.345 "name": "raid_bdev1", 00:19:51.345 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:51.345 "strip_size_kb": 0, 00:19:51.345 "state": "online", 00:19:51.345 "raid_level": "raid1", 00:19:51.345 "superblock": true, 00:19:51.345 "num_base_bdevs": 4, 00:19:51.345 "num_base_bdevs_discovered": 2, 00:19:51.345 "num_base_bdevs_operational": 2, 00:19:51.345 "base_bdevs_list": [ 00:19:51.345 { 00:19:51.345 "name": null, 00:19:51.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.345 "is_configured": false, 00:19:51.345 "data_offset": 0, 00:19:51.345 "data_size": 63488 00:19:51.345 }, 00:19:51.345 { 00:19:51.345 "name": null, 00:19:51.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.345 "is_configured": false, 00:19:51.345 "data_offset": 2048, 00:19:51.345 "data_size": 63488 00:19:51.345 }, 00:19:51.345 { 00:19:51.345 "name": "BaseBdev3", 00:19:51.345 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:51.345 "is_configured": true, 
00:19:51.345 "data_offset": 2048, 00:19:51.345 "data_size": 63488 00:19:51.345 }, 00:19:51.345 { 00:19:51.345 "name": "BaseBdev4", 00:19:51.345 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:51.345 "is_configured": true, 00:19:51.345 "data_offset": 2048, 00:19:51.345 "data_size": 63488 00:19:51.345 } 00:19:51.345 ] 00:19:51.345 }' 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.345 06:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:51.911 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:51.911 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.911 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:51.911 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:51.911 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.911 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.911 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.911 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.911 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:51.911 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.911 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.911 "name": "raid_bdev1", 00:19:51.911 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:51.911 "strip_size_kb": 0, 00:19:51.912 "state": "online", 00:19:51.912 "raid_level": "raid1", 00:19:51.912 
"superblock": true, 00:19:51.912 "num_base_bdevs": 4, 00:19:51.912 "num_base_bdevs_discovered": 2, 00:19:51.912 "num_base_bdevs_operational": 2, 00:19:51.912 "base_bdevs_list": [ 00:19:51.912 { 00:19:51.912 "name": null, 00:19:51.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.912 "is_configured": false, 00:19:51.912 "data_offset": 0, 00:19:51.912 "data_size": 63488 00:19:51.912 }, 00:19:51.912 { 00:19:51.912 "name": null, 00:19:51.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.912 "is_configured": false, 00:19:51.912 "data_offset": 2048, 00:19:51.912 "data_size": 63488 00:19:51.912 }, 00:19:51.912 { 00:19:51.912 "name": "BaseBdev3", 00:19:51.912 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:51.912 "is_configured": true, 00:19:51.912 "data_offset": 2048, 00:19:51.912 "data_size": 63488 00:19:51.912 }, 00:19:51.912 { 00:19:51.912 "name": "BaseBdev4", 00:19:51.912 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:51.912 "is_configured": true, 00:19:51.912 "data_offset": 2048, 00:19:51.912 "data_size": 63488 00:19:51.912 } 00:19:51.912 ] 00:19:51.912 }' 00:19:51.912 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.912 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:51.912 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.912 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:51.912 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:51.912 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.912 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:51.912 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:51.912 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:51.912 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.912 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:51.912 [2024-12-06 06:46:10.455328] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:51.912 [2024-12-06 06:46:10.455400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.912 [2024-12-06 06:46:10.455435] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:19:51.912 [2024-12-06 06:46:10.455450] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.912 [2024-12-06 06:46:10.456064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.912 [2024-12-06 06:46:10.456096] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:51.912 [2024-12-06 06:46:10.456202] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:51.912 [2024-12-06 06:46:10.456223] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:51.912 [2024-12-06 06:46:10.456237] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:51.912 [2024-12-06 06:46:10.456253] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:51.912 BaseBdev1 00:19:51.912 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.912 06:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:52.847 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:52.847 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.847 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.847 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.847 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.847 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:52.847 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.847 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.847 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.847 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.847 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.847 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.847 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:52.847 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.847 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.105 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.105 "name": "raid_bdev1", 00:19:53.105 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:53.105 "strip_size_kb": 0, 00:19:53.105 "state": "online", 00:19:53.105 "raid_level": "raid1", 00:19:53.105 "superblock": true, 00:19:53.105 
"num_base_bdevs": 4, 00:19:53.105 "num_base_bdevs_discovered": 2, 00:19:53.105 "num_base_bdevs_operational": 2, 00:19:53.105 "base_bdevs_list": [ 00:19:53.105 { 00:19:53.105 "name": null, 00:19:53.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.105 "is_configured": false, 00:19:53.105 "data_offset": 0, 00:19:53.105 "data_size": 63488 00:19:53.105 }, 00:19:53.105 { 00:19:53.105 "name": null, 00:19:53.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.105 "is_configured": false, 00:19:53.105 "data_offset": 2048, 00:19:53.105 "data_size": 63488 00:19:53.105 }, 00:19:53.105 { 00:19:53.105 "name": "BaseBdev3", 00:19:53.105 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:53.105 "is_configured": true, 00:19:53.105 "data_offset": 2048, 00:19:53.105 "data_size": 63488 00:19:53.105 }, 00:19:53.105 { 00:19:53.105 "name": "BaseBdev4", 00:19:53.105 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:53.105 "is_configured": true, 00:19:53.105 "data_offset": 2048, 00:19:53.105 "data_size": 63488 00:19:53.105 } 00:19:53.105 ] 00:19:53.105 }' 00:19:53.105 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.105 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.363 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:53.363 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.363 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:53.363 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:53.363 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.363 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.363 06:46:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.364 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.364 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.364 06:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.364 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.364 "name": "raid_bdev1", 00:19:53.364 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:53.364 "strip_size_kb": 0, 00:19:53.364 "state": "online", 00:19:53.364 "raid_level": "raid1", 00:19:53.364 "superblock": true, 00:19:53.364 "num_base_bdevs": 4, 00:19:53.364 "num_base_bdevs_discovered": 2, 00:19:53.364 "num_base_bdevs_operational": 2, 00:19:53.364 "base_bdevs_list": [ 00:19:53.364 { 00:19:53.364 "name": null, 00:19:53.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.364 "is_configured": false, 00:19:53.364 "data_offset": 0, 00:19:53.364 "data_size": 63488 00:19:53.364 }, 00:19:53.364 { 00:19:53.364 "name": null, 00:19:53.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.364 "is_configured": false, 00:19:53.364 "data_offset": 2048, 00:19:53.364 "data_size": 63488 00:19:53.364 }, 00:19:53.364 { 00:19:53.364 "name": "BaseBdev3", 00:19:53.364 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:53.364 "is_configured": true, 00:19:53.364 "data_offset": 2048, 00:19:53.364 "data_size": 63488 00:19:53.364 }, 00:19:53.364 { 00:19:53.364 "name": "BaseBdev4", 00:19:53.364 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:53.364 "is_configured": true, 00:19:53.364 "data_offset": 2048, 00:19:53.364 "data_size": 63488 00:19:53.364 } 00:19:53.364 ] 00:19:53.364 }' 00:19:53.364 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.622 06:46:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:53.622 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.622 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:53.622 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:53.622 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:19:53.622 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:53.622 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:53.622 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.622 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:53.622 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.622 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:53.622 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.622 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.622 [2024-12-06 06:46:12.124351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:53.622 [2024-12-06 06:46:12.124727] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:53.622 [2024-12-06 06:46:12.124760] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:19:53.622 request: 00:19:53.622 { 00:19:53.622 "base_bdev": "BaseBdev1", 00:19:53.622 "raid_bdev": "raid_bdev1", 00:19:53.622 "method": "bdev_raid_add_base_bdev", 00:19:53.622 "req_id": 1 00:19:53.622 } 00:19:53.622 Got JSON-RPC error response 00:19:53.622 response: 00:19:53.622 { 00:19:53.622 "code": -22, 00:19:53.622 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:53.622 } 00:19:53.622 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:53.623 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:19:53.623 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:53.623 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:53.623 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:53.623 06:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:54.557 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:54.557 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.557 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.557 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:54.557 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:54.557 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:54.557 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.557 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.557 06:46:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.557 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.557 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.557 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.557 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.557 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.557 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.557 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.557 "name": "raid_bdev1", 00:19:54.557 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:54.557 "strip_size_kb": 0, 00:19:54.557 "state": "online", 00:19:54.557 "raid_level": "raid1", 00:19:54.557 "superblock": true, 00:19:54.557 "num_base_bdevs": 4, 00:19:54.557 "num_base_bdevs_discovered": 2, 00:19:54.557 "num_base_bdevs_operational": 2, 00:19:54.557 "base_bdevs_list": [ 00:19:54.557 { 00:19:54.557 "name": null, 00:19:54.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.557 "is_configured": false, 00:19:54.557 "data_offset": 0, 00:19:54.557 "data_size": 63488 00:19:54.557 }, 00:19:54.557 { 00:19:54.558 "name": null, 00:19:54.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.558 "is_configured": false, 00:19:54.558 "data_offset": 2048, 00:19:54.558 "data_size": 63488 00:19:54.558 }, 00:19:54.558 { 00:19:54.558 "name": "BaseBdev3", 00:19:54.558 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:54.558 "is_configured": true, 00:19:54.558 "data_offset": 2048, 00:19:54.558 "data_size": 63488 00:19:54.558 }, 00:19:54.558 { 00:19:54.558 "name": "BaseBdev4", 00:19:54.558 "uuid": 
"194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:54.558 "is_configured": true, 00:19:54.558 "data_offset": 2048, 00:19:54.558 "data_size": 63488 00:19:54.558 } 00:19:54.558 ] 00:19:54.558 }' 00:19:54.558 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.558 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.123 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:55.124 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.124 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:55.124 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:55.124 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.124 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.124 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.124 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.124 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.124 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.124 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.124 "name": "raid_bdev1", 00:19:55.124 "uuid": "70d116d5-1de4-42ee-bdb4-b35b318b8359", 00:19:55.124 "strip_size_kb": 0, 00:19:55.124 "state": "online", 00:19:55.124 "raid_level": "raid1", 00:19:55.124 "superblock": true, 00:19:55.124 "num_base_bdevs": 4, 00:19:55.124 "num_base_bdevs_discovered": 2, 00:19:55.124 "num_base_bdevs_operational": 2, 00:19:55.124 
"base_bdevs_list": [ 00:19:55.124 { 00:19:55.124 "name": null, 00:19:55.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.124 "is_configured": false, 00:19:55.124 "data_offset": 0, 00:19:55.124 "data_size": 63488 00:19:55.124 }, 00:19:55.124 { 00:19:55.124 "name": null, 00:19:55.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.124 "is_configured": false, 00:19:55.124 "data_offset": 2048, 00:19:55.124 "data_size": 63488 00:19:55.124 }, 00:19:55.124 { 00:19:55.124 "name": "BaseBdev3", 00:19:55.124 "uuid": "f2ee9af5-2b11-5094-b1b0-b9be5b250233", 00:19:55.124 "is_configured": true, 00:19:55.124 "data_offset": 2048, 00:19:55.124 "data_size": 63488 00:19:55.124 }, 00:19:55.124 { 00:19:55.124 "name": "BaseBdev4", 00:19:55.124 "uuid": "194a9d42-e0b7-5f6a-9d70-9ea4430f27d3", 00:19:55.124 "is_configured": true, 00:19:55.124 "data_offset": 2048, 00:19:55.124 "data_size": 63488 00:19:55.124 } 00:19:55.124 ] 00:19:55.124 }' 00:19:55.124 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.124 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:55.124 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.382 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:55.382 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79674 00:19:55.382 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79674 ']' 00:19:55.382 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79674 00:19:55.382 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:19:55.382 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.382 06:46:13 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79674 00:19:55.382 killing process with pid 79674 00:19:55.382 Received shutdown signal, test time was about 19.181967 seconds 00:19:55.382 00:19:55.382 Latency(us) 00:19:55.382 [2024-12-06T06:46:14.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.382 [2024-12-06T06:46:14.029Z] =================================================================================================================== 00:19:55.382 [2024-12-06T06:46:14.029Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.382 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:55.382 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:55.382 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79674' 00:19:55.382 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79674 00:19:55.382 06:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79674 00:19:55.382 [2024-12-06 06:46:13.819714] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:55.382 [2024-12-06 06:46:13.819967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:55.382 [2024-12-06 06:46:13.820075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:55.382 [2024-12-06 06:46:13.820098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:55.640 [2024-12-06 06:46:14.201266] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:57.011 06:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:19:57.011 00:19:57.011 real 0m22.850s 00:19:57.011 user 0m31.221s 00:19:57.011 sys 0m2.265s 00:19:57.011 06:46:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.011 06:46:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:57.011 ************************************ 00:19:57.011 END TEST raid_rebuild_test_sb_io 00:19:57.011 ************************************ 00:19:57.011 06:46:15 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:19:57.011 06:46:15 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:19:57.011 06:46:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:57.011 06:46:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.011 06:46:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:57.011 ************************************ 00:19:57.011 START TEST raid5f_state_function_test 00:19:57.011 ************************************ 00:19:57.011 06:46:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:57.012 06:46:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 
00:19:57.012 Process raid pid: 80402 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80402 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80402' 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80402 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80402 ']' 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.012 06:46:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.012 [2024-12-06 06:46:15.492935] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:19:57.012 [2024-12-06 06:46:15.493094] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.269 [2024-12-06 06:46:15.664286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.269 [2024-12-06 06:46:15.794766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.526 [2024-12-06 06:46:16.002538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:57.526 [2024-12-06 06:46:16.002594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.783 [2024-12-06 06:46:16.407802] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:57.783 [2024-12-06 06:46:16.407875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:57.783 [2024-12-06 06:46:16.407894] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:57.783 [2024-12-06 06:46:16.407911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:57.783 [2024-12-06 06:46:16.407921] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:19:57.783 [2024-12-06 06:46:16.407937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.783 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.041 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:58.041 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.041 "name": "Existed_Raid", 00:19:58.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.041 "strip_size_kb": 64, 00:19:58.041 "state": "configuring", 00:19:58.041 "raid_level": "raid5f", 00:19:58.041 "superblock": false, 00:19:58.041 "num_base_bdevs": 3, 00:19:58.041 "num_base_bdevs_discovered": 0, 00:19:58.041 "num_base_bdevs_operational": 3, 00:19:58.041 "base_bdevs_list": [ 00:19:58.041 { 00:19:58.041 "name": "BaseBdev1", 00:19:58.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.041 "is_configured": false, 00:19:58.041 "data_offset": 0, 00:19:58.041 "data_size": 0 00:19:58.041 }, 00:19:58.041 { 00:19:58.041 "name": "BaseBdev2", 00:19:58.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.041 "is_configured": false, 00:19:58.041 "data_offset": 0, 00:19:58.041 "data_size": 0 00:19:58.041 }, 00:19:58.041 { 00:19:58.041 "name": "BaseBdev3", 00:19:58.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.041 "is_configured": false, 00:19:58.041 "data_offset": 0, 00:19:58.041 "data_size": 0 00:19:58.041 } 00:19:58.041 ] 00:19:58.041 }' 00:19:58.041 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.041 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.299 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:58.299 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.299 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.299 [2024-12-06 06:46:16.915869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:58.299 [2024-12-06 06:46:16.915915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:19:58.299 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.299 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:58.299 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.299 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.299 [2024-12-06 06:46:16.923861] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:58.299 [2024-12-06 06:46:16.923922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:58.299 [2024-12-06 06:46:16.923939] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:58.299 [2024-12-06 06:46:16.923955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:58.299 [2024-12-06 06:46:16.923966] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:58.299 [2024-12-06 06:46:16.923980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:58.299 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.299 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:58.299 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.299 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.559 [2024-12-06 06:46:16.970951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:58.559 BaseBdev1 00:19:58.559 06:46:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.559 06:46:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:58.559 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:58.559 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:58.559 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:58.559 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:58.559 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:58.559 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:58.559 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.559 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.559 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.559 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:58.559 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.559 06:46:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.559 [ 00:19:58.559 { 00:19:58.559 "name": "BaseBdev1", 00:19:58.559 "aliases": [ 00:19:58.559 "7f17cf3f-5659-4f92-a2ec-59f00a50591b" 00:19:58.559 ], 00:19:58.559 "product_name": "Malloc disk", 00:19:58.559 "block_size": 512, 00:19:58.559 "num_blocks": 65536, 00:19:58.559 "uuid": "7f17cf3f-5659-4f92-a2ec-59f00a50591b", 00:19:58.559 "assigned_rate_limits": { 00:19:58.559 "rw_ios_per_sec": 0, 00:19:58.559 
"rw_mbytes_per_sec": 0, 00:19:58.559 "r_mbytes_per_sec": 0, 00:19:58.559 "w_mbytes_per_sec": 0 00:19:58.559 }, 00:19:58.559 "claimed": true, 00:19:58.559 "claim_type": "exclusive_write", 00:19:58.559 "zoned": false, 00:19:58.559 "supported_io_types": { 00:19:58.559 "read": true, 00:19:58.559 "write": true, 00:19:58.559 "unmap": true, 00:19:58.559 "flush": true, 00:19:58.559 "reset": true, 00:19:58.559 "nvme_admin": false, 00:19:58.559 "nvme_io": false, 00:19:58.559 "nvme_io_md": false, 00:19:58.559 "write_zeroes": true, 00:19:58.559 "zcopy": true, 00:19:58.559 "get_zone_info": false, 00:19:58.559 "zone_management": false, 00:19:58.559 "zone_append": false, 00:19:58.559 "compare": false, 00:19:58.559 "compare_and_write": false, 00:19:58.559 "abort": true, 00:19:58.559 "seek_hole": false, 00:19:58.559 "seek_data": false, 00:19:58.559 "copy": true, 00:19:58.559 "nvme_iov_md": false 00:19:58.559 }, 00:19:58.559 "memory_domains": [ 00:19:58.559 { 00:19:58.559 "dma_device_id": "system", 00:19:58.559 "dma_device_type": 1 00:19:58.559 }, 00:19:58.559 { 00:19:58.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.559 "dma_device_type": 2 00:19:58.559 } 00:19:58.559 ], 00:19:58.559 "driver_specific": {} 00:19:58.559 } 00:19:58.559 ] 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:58.559 06:46:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.559 "name": "Existed_Raid", 00:19:58.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.559 "strip_size_kb": 64, 00:19:58.559 "state": "configuring", 00:19:58.559 "raid_level": "raid5f", 00:19:58.559 "superblock": false, 00:19:58.559 "num_base_bdevs": 3, 00:19:58.559 "num_base_bdevs_discovered": 1, 00:19:58.559 "num_base_bdevs_operational": 3, 00:19:58.559 "base_bdevs_list": [ 00:19:58.559 { 00:19:58.559 "name": "BaseBdev1", 00:19:58.559 "uuid": "7f17cf3f-5659-4f92-a2ec-59f00a50591b", 00:19:58.559 "is_configured": true, 00:19:58.559 "data_offset": 0, 00:19:58.559 "data_size": 65536 00:19:58.559 }, 00:19:58.559 { 00:19:58.559 "name": 
"BaseBdev2", 00:19:58.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.559 "is_configured": false, 00:19:58.559 "data_offset": 0, 00:19:58.559 "data_size": 0 00:19:58.559 }, 00:19:58.559 { 00:19:58.559 "name": "BaseBdev3", 00:19:58.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.559 "is_configured": false, 00:19:58.559 "data_offset": 0, 00:19:58.559 "data_size": 0 00:19:58.559 } 00:19:58.559 ] 00:19:58.559 }' 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.559 06:46:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.125 [2024-12-06 06:46:17.515166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:59.125 [2024-12-06 06:46:17.515239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.125 [2024-12-06 06:46:17.523217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:59.125 [2024-12-06 06:46:17.525842] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:19:59.125 [2024-12-06 06:46:17.526017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:59.125 [2024-12-06 06:46:17.526142] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:59.125 [2024-12-06 06:46:17.526307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.125 "name": "Existed_Raid", 00:19:59.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.125 "strip_size_kb": 64, 00:19:59.125 "state": "configuring", 00:19:59.125 "raid_level": "raid5f", 00:19:59.125 "superblock": false, 00:19:59.125 "num_base_bdevs": 3, 00:19:59.125 "num_base_bdevs_discovered": 1, 00:19:59.125 "num_base_bdevs_operational": 3, 00:19:59.125 "base_bdevs_list": [ 00:19:59.125 { 00:19:59.125 "name": "BaseBdev1", 00:19:59.125 "uuid": "7f17cf3f-5659-4f92-a2ec-59f00a50591b", 00:19:59.125 "is_configured": true, 00:19:59.125 "data_offset": 0, 00:19:59.125 "data_size": 65536 00:19:59.125 }, 00:19:59.125 { 00:19:59.125 "name": "BaseBdev2", 00:19:59.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.125 "is_configured": false, 00:19:59.125 "data_offset": 0, 00:19:59.125 "data_size": 0 00:19:59.125 }, 00:19:59.125 { 00:19:59.125 "name": "BaseBdev3", 00:19:59.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.125 "is_configured": false, 00:19:59.125 "data_offset": 0, 00:19:59.125 "data_size": 0 00:19:59.125 } 00:19:59.125 ] 00:19:59.125 }' 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.125 06:46:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.692 [2024-12-06 06:46:18.082235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:59.692 BaseBdev2 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:59.692 [ 00:19:59.692 { 00:19:59.692 "name": "BaseBdev2", 00:19:59.692 "aliases": [ 00:19:59.692 "c954f909-43bf-4cc2-afb3-3e2b6a62cb54" 00:19:59.692 ], 00:19:59.692 "product_name": "Malloc disk", 00:19:59.692 "block_size": 512, 00:19:59.692 "num_blocks": 65536, 00:19:59.692 "uuid": "c954f909-43bf-4cc2-afb3-3e2b6a62cb54", 00:19:59.692 "assigned_rate_limits": { 00:19:59.692 "rw_ios_per_sec": 0, 00:19:59.692 "rw_mbytes_per_sec": 0, 00:19:59.692 "r_mbytes_per_sec": 0, 00:19:59.692 "w_mbytes_per_sec": 0 00:19:59.692 }, 00:19:59.692 "claimed": true, 00:19:59.692 "claim_type": "exclusive_write", 00:19:59.692 "zoned": false, 00:19:59.692 "supported_io_types": { 00:19:59.692 "read": true, 00:19:59.692 "write": true, 00:19:59.692 "unmap": true, 00:19:59.692 "flush": true, 00:19:59.692 "reset": true, 00:19:59.692 "nvme_admin": false, 00:19:59.692 "nvme_io": false, 00:19:59.692 "nvme_io_md": false, 00:19:59.692 "write_zeroes": true, 00:19:59.692 "zcopy": true, 00:19:59.692 "get_zone_info": false, 00:19:59.692 "zone_management": false, 00:19:59.692 "zone_append": false, 00:19:59.692 "compare": false, 00:19:59.692 "compare_and_write": false, 00:19:59.692 "abort": true, 00:19:59.692 "seek_hole": false, 00:19:59.692 "seek_data": false, 00:19:59.692 "copy": true, 00:19:59.692 "nvme_iov_md": false 00:19:59.692 }, 00:19:59.692 "memory_domains": [ 00:19:59.692 { 00:19:59.692 "dma_device_id": "system", 00:19:59.692 "dma_device_type": 1 00:19:59.692 }, 00:19:59.692 { 00:19:59.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.692 "dma_device_type": 2 00:19:59.692 } 00:19:59.692 ], 00:19:59.692 "driver_specific": {} 00:19:59.692 } 00:19:59.692 ] 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:19:59.692 "name": "Existed_Raid", 00:19:59.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.692 "strip_size_kb": 64, 00:19:59.692 "state": "configuring", 00:19:59.692 "raid_level": "raid5f", 00:19:59.692 "superblock": false, 00:19:59.692 "num_base_bdevs": 3, 00:19:59.692 "num_base_bdevs_discovered": 2, 00:19:59.692 "num_base_bdevs_operational": 3, 00:19:59.692 "base_bdevs_list": [ 00:19:59.692 { 00:19:59.692 "name": "BaseBdev1", 00:19:59.692 "uuid": "7f17cf3f-5659-4f92-a2ec-59f00a50591b", 00:19:59.692 "is_configured": true, 00:19:59.692 "data_offset": 0, 00:19:59.692 "data_size": 65536 00:19:59.692 }, 00:19:59.692 { 00:19:59.692 "name": "BaseBdev2", 00:19:59.692 "uuid": "c954f909-43bf-4cc2-afb3-3e2b6a62cb54", 00:19:59.692 "is_configured": true, 00:19:59.692 "data_offset": 0, 00:19:59.692 "data_size": 65536 00:19:59.692 }, 00:19:59.692 { 00:19:59.692 "name": "BaseBdev3", 00:19:59.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.692 "is_configured": false, 00:19:59.692 "data_offset": 0, 00:19:59.692 "data_size": 0 00:19:59.692 } 00:19:59.692 ] 00:19:59.692 }' 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.692 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.259 [2024-12-06 06:46:18.682312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:00.259 [2024-12-06 06:46:18.682557] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:00.259 [2024-12-06 06:46:18.682740] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:00.259 [2024-12-06 06:46:18.683217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:00.259 [2024-12-06 06:46:18.688808] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:00.259 [2024-12-06 06:46:18.688957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:00.259 [2024-12-06 06:46:18.689511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.259 BaseBdev3 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.259 [ 00:20:00.259 { 00:20:00.259 "name": "BaseBdev3", 00:20:00.259 "aliases": [ 00:20:00.259 "a32ecb1f-2535-481a-9fb1-aafe2f1ccbf1" 00:20:00.259 ], 00:20:00.259 "product_name": "Malloc disk", 00:20:00.259 "block_size": 512, 00:20:00.259 "num_blocks": 65536, 00:20:00.259 "uuid": "a32ecb1f-2535-481a-9fb1-aafe2f1ccbf1", 00:20:00.259 "assigned_rate_limits": { 00:20:00.259 "rw_ios_per_sec": 0, 00:20:00.259 "rw_mbytes_per_sec": 0, 00:20:00.259 "r_mbytes_per_sec": 0, 00:20:00.259 "w_mbytes_per_sec": 0 00:20:00.259 }, 00:20:00.259 "claimed": true, 00:20:00.259 "claim_type": "exclusive_write", 00:20:00.259 "zoned": false, 00:20:00.259 "supported_io_types": { 00:20:00.259 "read": true, 00:20:00.259 "write": true, 00:20:00.259 "unmap": true, 00:20:00.259 "flush": true, 00:20:00.259 "reset": true, 00:20:00.259 "nvme_admin": false, 00:20:00.259 "nvme_io": false, 00:20:00.259 "nvme_io_md": false, 00:20:00.259 "write_zeroes": true, 00:20:00.259 "zcopy": true, 00:20:00.259 "get_zone_info": false, 00:20:00.259 "zone_management": false, 00:20:00.259 "zone_append": false, 00:20:00.259 "compare": false, 00:20:00.259 "compare_and_write": false, 00:20:00.259 "abort": true, 00:20:00.259 "seek_hole": false, 00:20:00.259 "seek_data": false, 00:20:00.259 "copy": true, 00:20:00.259 "nvme_iov_md": false 00:20:00.259 }, 00:20:00.259 "memory_domains": [ 00:20:00.259 { 00:20:00.259 "dma_device_id": "system", 00:20:00.259 "dma_device_type": 1 00:20:00.259 }, 00:20:00.259 { 00:20:00.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.259 "dma_device_type": 2 00:20:00.259 } 00:20:00.259 ], 00:20:00.259 "driver_specific": {} 00:20:00.259 } 00:20:00.259 ] 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.259 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:00.260 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:00.260 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:00.260 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.260 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.260 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.260 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.260 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.260 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.260 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.260 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.260 06:46:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.260 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.260 "name": "Existed_Raid", 00:20:00.260 "uuid": "8cff9f6c-b5c7-4bb6-927b-0ee9aeb43e1d", 00:20:00.260 "strip_size_kb": 64, 00:20:00.260 "state": "online", 00:20:00.260 "raid_level": "raid5f", 00:20:00.260 "superblock": false, 00:20:00.260 "num_base_bdevs": 3, 00:20:00.260 "num_base_bdevs_discovered": 3, 00:20:00.260 "num_base_bdevs_operational": 3, 00:20:00.260 "base_bdevs_list": [ 00:20:00.260 { 00:20:00.260 "name": "BaseBdev1", 00:20:00.260 "uuid": "7f17cf3f-5659-4f92-a2ec-59f00a50591b", 00:20:00.260 "is_configured": true, 00:20:00.260 "data_offset": 0, 00:20:00.260 "data_size": 65536 00:20:00.260 }, 00:20:00.260 { 00:20:00.260 "name": "BaseBdev2", 00:20:00.260 "uuid": "c954f909-43bf-4cc2-afb3-3e2b6a62cb54", 00:20:00.260 "is_configured": true, 00:20:00.260 "data_offset": 0, 00:20:00.260 "data_size": 65536 00:20:00.260 }, 00:20:00.260 { 00:20:00.260 "name": "BaseBdev3", 00:20:00.260 "uuid": "a32ecb1f-2535-481a-9fb1-aafe2f1ccbf1", 00:20:00.260 "is_configured": true, 00:20:00.260 "data_offset": 0, 00:20:00.260 "data_size": 65536 00:20:00.260 } 00:20:00.260 ] 00:20:00.260 }' 00:20:00.260 06:46:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.260 06:46:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.825 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:00.825 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:00.825 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:00.825 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:00.825 06:46:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:00.825 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:00.825 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:00.825 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.825 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.825 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:00.825 [2024-12-06 06:46:19.227672] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:00.825 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.825 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:00.825 "name": "Existed_Raid", 00:20:00.825 "aliases": [ 00:20:00.825 "8cff9f6c-b5c7-4bb6-927b-0ee9aeb43e1d" 00:20:00.825 ], 00:20:00.825 "product_name": "Raid Volume", 00:20:00.825 "block_size": 512, 00:20:00.825 "num_blocks": 131072, 00:20:00.825 "uuid": "8cff9f6c-b5c7-4bb6-927b-0ee9aeb43e1d", 00:20:00.825 "assigned_rate_limits": { 00:20:00.825 "rw_ios_per_sec": 0, 00:20:00.825 "rw_mbytes_per_sec": 0, 00:20:00.825 "r_mbytes_per_sec": 0, 00:20:00.825 "w_mbytes_per_sec": 0 00:20:00.825 }, 00:20:00.825 "claimed": false, 00:20:00.825 "zoned": false, 00:20:00.825 "supported_io_types": { 00:20:00.825 "read": true, 00:20:00.825 "write": true, 00:20:00.825 "unmap": false, 00:20:00.825 "flush": false, 00:20:00.825 "reset": true, 00:20:00.825 "nvme_admin": false, 00:20:00.825 "nvme_io": false, 00:20:00.825 "nvme_io_md": false, 00:20:00.825 "write_zeroes": true, 00:20:00.825 "zcopy": false, 00:20:00.825 "get_zone_info": false, 00:20:00.825 "zone_management": false, 00:20:00.826 "zone_append": false, 
00:20:00.826 "compare": false, 00:20:00.826 "compare_and_write": false, 00:20:00.826 "abort": false, 00:20:00.826 "seek_hole": false, 00:20:00.826 "seek_data": false, 00:20:00.826 "copy": false, 00:20:00.826 "nvme_iov_md": false 00:20:00.826 }, 00:20:00.826 "driver_specific": { 00:20:00.826 "raid": { 00:20:00.826 "uuid": "8cff9f6c-b5c7-4bb6-927b-0ee9aeb43e1d", 00:20:00.826 "strip_size_kb": 64, 00:20:00.826 "state": "online", 00:20:00.826 "raid_level": "raid5f", 00:20:00.826 "superblock": false, 00:20:00.826 "num_base_bdevs": 3, 00:20:00.826 "num_base_bdevs_discovered": 3, 00:20:00.826 "num_base_bdevs_operational": 3, 00:20:00.826 "base_bdevs_list": [ 00:20:00.826 { 00:20:00.826 "name": "BaseBdev1", 00:20:00.826 "uuid": "7f17cf3f-5659-4f92-a2ec-59f00a50591b", 00:20:00.826 "is_configured": true, 00:20:00.826 "data_offset": 0, 00:20:00.826 "data_size": 65536 00:20:00.826 }, 00:20:00.826 { 00:20:00.826 "name": "BaseBdev2", 00:20:00.826 "uuid": "c954f909-43bf-4cc2-afb3-3e2b6a62cb54", 00:20:00.826 "is_configured": true, 00:20:00.826 "data_offset": 0, 00:20:00.826 "data_size": 65536 00:20:00.826 }, 00:20:00.826 { 00:20:00.826 "name": "BaseBdev3", 00:20:00.826 "uuid": "a32ecb1f-2535-481a-9fb1-aafe2f1ccbf1", 00:20:00.826 "is_configured": true, 00:20:00.826 "data_offset": 0, 00:20:00.826 "data_size": 65536 00:20:00.826 } 00:20:00.826 ] 00:20:00.826 } 00:20:00.826 } 00:20:00.826 }' 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:00.826 BaseBdev2 00:20:00.826 BaseBdev3' 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.826 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.084 [2024-12-06 06:46:19.535566] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:01.084 
06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.084 "name": "Existed_Raid", 00:20:01.084 "uuid": "8cff9f6c-b5c7-4bb6-927b-0ee9aeb43e1d", 00:20:01.084 "strip_size_kb": 64, 00:20:01.084 "state": 
"online", 00:20:01.084 "raid_level": "raid5f", 00:20:01.084 "superblock": false, 00:20:01.084 "num_base_bdevs": 3, 00:20:01.084 "num_base_bdevs_discovered": 2, 00:20:01.084 "num_base_bdevs_operational": 2, 00:20:01.084 "base_bdevs_list": [ 00:20:01.084 { 00:20:01.084 "name": null, 00:20:01.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.084 "is_configured": false, 00:20:01.084 "data_offset": 0, 00:20:01.084 "data_size": 65536 00:20:01.084 }, 00:20:01.084 { 00:20:01.084 "name": "BaseBdev2", 00:20:01.084 "uuid": "c954f909-43bf-4cc2-afb3-3e2b6a62cb54", 00:20:01.084 "is_configured": true, 00:20:01.084 "data_offset": 0, 00:20:01.084 "data_size": 65536 00:20:01.084 }, 00:20:01.084 { 00:20:01.084 "name": "BaseBdev3", 00:20:01.084 "uuid": "a32ecb1f-2535-481a-9fb1-aafe2f1ccbf1", 00:20:01.084 "is_configured": true, 00:20:01.084 "data_offset": 0, 00:20:01.084 "data_size": 65536 00:20:01.084 } 00:20:01.084 ] 00:20:01.084 }' 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.084 06:46:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.650 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:01.650 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:01.650 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.650 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.650 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:01.650 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.650 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.650 06:46:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:01.650 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:01.650 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:01.650 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.650 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.650 [2024-12-06 06:46:20.221442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:01.650 [2024-12-06 06:46:20.221791] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:01.908 [2024-12-06 06:46:20.308436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:01.908 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.908 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:01.908 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:01.908 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:01.908 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.908 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.908 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.908 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.908 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:01.908 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:20:01.908 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:01.908 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.908 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.908 [2024-12-06 06:46:20.380540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:01.909 [2024-12-06 06:46:20.380721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:01.909 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.909 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:01.909 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:01.909 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.909 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.909 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.909 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:01.909 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.909 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:01.909 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:01.909 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:01.909 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:01.909 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:20:01.909 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:01.909 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.909 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.168 BaseBdev2 00:20:02.168 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.168 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:02.168 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:02.168 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:02.168 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:02.168 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:02.168 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:02.168 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:02.168 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.168 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.168 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:20:02.169 [ 00:20:02.169 { 00:20:02.169 "name": "BaseBdev2", 00:20:02.169 "aliases": [ 00:20:02.169 "750231c9-750a-4c0b-a37c-085d43ddea38" 00:20:02.169 ], 00:20:02.169 "product_name": "Malloc disk", 00:20:02.169 "block_size": 512, 00:20:02.169 "num_blocks": 65536, 00:20:02.169 "uuid": "750231c9-750a-4c0b-a37c-085d43ddea38", 00:20:02.169 "assigned_rate_limits": { 00:20:02.169 "rw_ios_per_sec": 0, 00:20:02.169 "rw_mbytes_per_sec": 0, 00:20:02.169 "r_mbytes_per_sec": 0, 00:20:02.169 "w_mbytes_per_sec": 0 00:20:02.169 }, 00:20:02.169 "claimed": false, 00:20:02.169 "zoned": false, 00:20:02.169 "supported_io_types": { 00:20:02.169 "read": true, 00:20:02.169 "write": true, 00:20:02.169 "unmap": true, 00:20:02.169 "flush": true, 00:20:02.169 "reset": true, 00:20:02.169 "nvme_admin": false, 00:20:02.169 "nvme_io": false, 00:20:02.169 "nvme_io_md": false, 00:20:02.169 "write_zeroes": true, 00:20:02.169 "zcopy": true, 00:20:02.169 "get_zone_info": false, 00:20:02.169 "zone_management": false, 00:20:02.169 "zone_append": false, 00:20:02.169 "compare": false, 00:20:02.169 "compare_and_write": false, 00:20:02.169 "abort": true, 00:20:02.169 "seek_hole": false, 00:20:02.169 "seek_data": false, 00:20:02.169 "copy": true, 00:20:02.169 "nvme_iov_md": false 00:20:02.169 }, 00:20:02.169 "memory_domains": [ 00:20:02.169 { 00:20:02.169 "dma_device_id": "system", 00:20:02.169 "dma_device_type": 1 00:20:02.169 }, 00:20:02.169 { 00:20:02.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.169 "dma_device_type": 2 00:20:02.169 } 00:20:02.169 ], 00:20:02.169 "driver_specific": {} 00:20:02.169 } 00:20:02.169 ] 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.169 BaseBdev3 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:02.169 [ 00:20:02.169 { 00:20:02.169 "name": "BaseBdev3", 00:20:02.169 "aliases": [ 00:20:02.169 "829a460d-7d3f-47a8-a7c7-d178cb0fb0fc" 00:20:02.169 ], 00:20:02.169 "product_name": "Malloc disk", 00:20:02.169 "block_size": 512, 00:20:02.169 "num_blocks": 65536, 00:20:02.169 "uuid": "829a460d-7d3f-47a8-a7c7-d178cb0fb0fc", 00:20:02.169 "assigned_rate_limits": { 00:20:02.169 "rw_ios_per_sec": 0, 00:20:02.169 "rw_mbytes_per_sec": 0, 00:20:02.169 "r_mbytes_per_sec": 0, 00:20:02.169 "w_mbytes_per_sec": 0 00:20:02.169 }, 00:20:02.169 "claimed": false, 00:20:02.169 "zoned": false, 00:20:02.169 "supported_io_types": { 00:20:02.169 "read": true, 00:20:02.169 "write": true, 00:20:02.169 "unmap": true, 00:20:02.169 "flush": true, 00:20:02.169 "reset": true, 00:20:02.169 "nvme_admin": false, 00:20:02.169 "nvme_io": false, 00:20:02.169 "nvme_io_md": false, 00:20:02.169 "write_zeroes": true, 00:20:02.169 "zcopy": true, 00:20:02.169 "get_zone_info": false, 00:20:02.169 "zone_management": false, 00:20:02.169 "zone_append": false, 00:20:02.169 "compare": false, 00:20:02.169 "compare_and_write": false, 00:20:02.169 "abort": true, 00:20:02.169 "seek_hole": false, 00:20:02.169 "seek_data": false, 00:20:02.169 "copy": true, 00:20:02.169 "nvme_iov_md": false 00:20:02.169 }, 00:20:02.169 "memory_domains": [ 00:20:02.169 { 00:20:02.169 "dma_device_id": "system", 00:20:02.169 "dma_device_type": 1 00:20:02.169 }, 00:20:02.169 { 00:20:02.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.169 "dma_device_type": 2 00:20:02.169 } 00:20:02.169 ], 00:20:02.169 "driver_specific": {} 00:20:02.169 } 00:20:02.169 ] 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:02.169 06:46:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.169 [2024-12-06 06:46:20.679036] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:02.169 [2024-12-06 06:46:20.679229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:02.169 [2024-12-06 06:46:20.679282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:02.169 [2024-12-06 06:46:20.681750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.169 06:46:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.169 "name": "Existed_Raid", 00:20:02.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.169 "strip_size_kb": 64, 00:20:02.169 "state": "configuring", 00:20:02.169 "raid_level": "raid5f", 00:20:02.169 "superblock": false, 00:20:02.169 "num_base_bdevs": 3, 00:20:02.169 "num_base_bdevs_discovered": 2, 00:20:02.169 "num_base_bdevs_operational": 3, 00:20:02.169 "base_bdevs_list": [ 00:20:02.169 { 00:20:02.169 "name": "BaseBdev1", 00:20:02.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.169 "is_configured": false, 00:20:02.169 "data_offset": 0, 00:20:02.169 "data_size": 0 00:20:02.169 }, 00:20:02.169 { 00:20:02.169 "name": "BaseBdev2", 00:20:02.169 "uuid": "750231c9-750a-4c0b-a37c-085d43ddea38", 00:20:02.169 "is_configured": true, 00:20:02.169 "data_offset": 0, 00:20:02.169 "data_size": 65536 00:20:02.169 }, 00:20:02.169 { 00:20:02.169 "name": "BaseBdev3", 00:20:02.169 "uuid": "829a460d-7d3f-47a8-a7c7-d178cb0fb0fc", 00:20:02.169 "is_configured": true, 
00:20:02.169 "data_offset": 0, 00:20:02.169 "data_size": 65536 00:20:02.169 } 00:20:02.169 ] 00:20:02.169 }' 00:20:02.169 06:46:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.170 06:46:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.761 [2024-12-06 06:46:21.195202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.761 06:46:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.761 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.761 "name": "Existed_Raid", 00:20:02.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.761 "strip_size_kb": 64, 00:20:02.761 "state": "configuring", 00:20:02.761 "raid_level": "raid5f", 00:20:02.761 "superblock": false, 00:20:02.761 "num_base_bdevs": 3, 00:20:02.761 "num_base_bdevs_discovered": 1, 00:20:02.761 "num_base_bdevs_operational": 3, 00:20:02.761 "base_bdevs_list": [ 00:20:02.761 { 00:20:02.761 "name": "BaseBdev1", 00:20:02.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.761 "is_configured": false, 00:20:02.761 "data_offset": 0, 00:20:02.761 "data_size": 0 00:20:02.761 }, 00:20:02.761 { 00:20:02.761 "name": null, 00:20:02.761 "uuid": "750231c9-750a-4c0b-a37c-085d43ddea38", 00:20:02.762 "is_configured": false, 00:20:02.762 "data_offset": 0, 00:20:02.762 "data_size": 65536 00:20:02.762 }, 00:20:02.762 { 00:20:02.762 "name": "BaseBdev3", 00:20:02.762 "uuid": "829a460d-7d3f-47a8-a7c7-d178cb0fb0fc", 00:20:02.762 "is_configured": true, 00:20:02.762 "data_offset": 0, 00:20:02.762 "data_size": 65536 00:20:02.762 } 00:20:02.762 ] 00:20:02.762 }' 00:20:02.762 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.762 06:46:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.330 [2024-12-06 06:46:21.813602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.330 BaseBdev1 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:03.330 06:46:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.330 [ 00:20:03.330 { 00:20:03.330 "name": "BaseBdev1", 00:20:03.330 "aliases": [ 00:20:03.330 "1c09d919-ee1e-4d11-9c48-4c14623c783d" 00:20:03.330 ], 00:20:03.330 "product_name": "Malloc disk", 00:20:03.330 "block_size": 512, 00:20:03.330 "num_blocks": 65536, 00:20:03.330 "uuid": "1c09d919-ee1e-4d11-9c48-4c14623c783d", 00:20:03.330 "assigned_rate_limits": { 00:20:03.330 "rw_ios_per_sec": 0, 00:20:03.330 "rw_mbytes_per_sec": 0, 00:20:03.330 "r_mbytes_per_sec": 0, 00:20:03.330 "w_mbytes_per_sec": 0 00:20:03.330 }, 00:20:03.330 "claimed": true, 00:20:03.330 "claim_type": "exclusive_write", 00:20:03.330 "zoned": false, 00:20:03.330 "supported_io_types": { 00:20:03.330 "read": true, 00:20:03.330 "write": true, 00:20:03.330 "unmap": true, 00:20:03.330 "flush": true, 00:20:03.330 "reset": true, 00:20:03.330 "nvme_admin": false, 00:20:03.330 "nvme_io": false, 00:20:03.330 "nvme_io_md": false, 00:20:03.330 "write_zeroes": true, 00:20:03.330 "zcopy": true, 00:20:03.330 "get_zone_info": false, 00:20:03.330 "zone_management": false, 00:20:03.330 "zone_append": false, 00:20:03.330 
"compare": false, 00:20:03.330 "compare_and_write": false, 00:20:03.330 "abort": true, 00:20:03.330 "seek_hole": false, 00:20:03.330 "seek_data": false, 00:20:03.330 "copy": true, 00:20:03.330 "nvme_iov_md": false 00:20:03.330 }, 00:20:03.330 "memory_domains": [ 00:20:03.330 { 00:20:03.330 "dma_device_id": "system", 00:20:03.330 "dma_device_type": 1 00:20:03.330 }, 00:20:03.330 { 00:20:03.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.330 "dma_device_type": 2 00:20:03.330 } 00:20:03.330 ], 00:20:03.330 "driver_specific": {} 00:20:03.330 } 00:20:03.330 ] 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.330 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.331 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.331 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.331 06:46:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.331 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.331 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.331 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.331 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.331 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.331 "name": "Existed_Raid", 00:20:03.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.331 "strip_size_kb": 64, 00:20:03.331 "state": "configuring", 00:20:03.331 "raid_level": "raid5f", 00:20:03.331 "superblock": false, 00:20:03.331 "num_base_bdevs": 3, 00:20:03.331 "num_base_bdevs_discovered": 2, 00:20:03.331 "num_base_bdevs_operational": 3, 00:20:03.331 "base_bdevs_list": [ 00:20:03.331 { 00:20:03.331 "name": "BaseBdev1", 00:20:03.331 "uuid": "1c09d919-ee1e-4d11-9c48-4c14623c783d", 00:20:03.331 "is_configured": true, 00:20:03.331 "data_offset": 0, 00:20:03.331 "data_size": 65536 00:20:03.331 }, 00:20:03.331 { 00:20:03.331 "name": null, 00:20:03.331 "uuid": "750231c9-750a-4c0b-a37c-085d43ddea38", 00:20:03.331 "is_configured": false, 00:20:03.331 "data_offset": 0, 00:20:03.331 "data_size": 65536 00:20:03.331 }, 00:20:03.331 { 00:20:03.331 "name": "BaseBdev3", 00:20:03.331 "uuid": "829a460d-7d3f-47a8-a7c7-d178cb0fb0fc", 00:20:03.331 "is_configured": true, 00:20:03.331 "data_offset": 0, 00:20:03.331 "data_size": 65536 00:20:03.331 } 00:20:03.331 ] 00:20:03.331 }' 00:20:03.331 06:46:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.331 06:46:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.898 06:46:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.898 [2024-12-06 06:46:22.437833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:03.898 06:46:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.898 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.899 06:46:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.899 06:46:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.899 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.899 06:46:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.899 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.899 "name": "Existed_Raid", 00:20:03.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.899 "strip_size_kb": 64, 00:20:03.899 "state": "configuring", 00:20:03.899 "raid_level": "raid5f", 00:20:03.899 "superblock": false, 00:20:03.899 "num_base_bdevs": 3, 00:20:03.899 "num_base_bdevs_discovered": 1, 00:20:03.899 "num_base_bdevs_operational": 3, 00:20:03.899 "base_bdevs_list": [ 00:20:03.899 { 00:20:03.899 "name": "BaseBdev1", 00:20:03.899 "uuid": "1c09d919-ee1e-4d11-9c48-4c14623c783d", 00:20:03.899 "is_configured": true, 00:20:03.899 "data_offset": 0, 00:20:03.899 "data_size": 65536 00:20:03.899 }, 00:20:03.899 { 00:20:03.899 "name": null, 00:20:03.899 "uuid": "750231c9-750a-4c0b-a37c-085d43ddea38", 00:20:03.899 "is_configured": false, 00:20:03.899 "data_offset": 0, 00:20:03.899 "data_size": 65536 00:20:03.899 }, 00:20:03.899 { 00:20:03.899 "name": null, 
00:20:03.899 "uuid": "829a460d-7d3f-47a8-a7c7-d178cb0fb0fc", 00:20:03.899 "is_configured": false, 00:20:03.899 "data_offset": 0, 00:20:03.899 "data_size": 65536 00:20:03.899 } 00:20:03.899 ] 00:20:03.899 }' 00:20:03.899 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.899 06:46:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.466 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:04.466 06:46:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.466 06:46:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.466 06:46:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.466 06:46:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.466 [2024-12-06 06:46:23.030004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:04.466 06:46:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.466 "name": "Existed_Raid", 00:20:04.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.466 "strip_size_kb": 64, 00:20:04.466 "state": "configuring", 00:20:04.466 "raid_level": "raid5f", 00:20:04.466 "superblock": false, 00:20:04.466 "num_base_bdevs": 3, 00:20:04.466 "num_base_bdevs_discovered": 2, 00:20:04.466 "num_base_bdevs_operational": 3, 00:20:04.466 "base_bdevs_list": [ 00:20:04.466 { 
00:20:04.466 "name": "BaseBdev1", 00:20:04.466 "uuid": "1c09d919-ee1e-4d11-9c48-4c14623c783d", 00:20:04.466 "is_configured": true, 00:20:04.466 "data_offset": 0, 00:20:04.466 "data_size": 65536 00:20:04.466 }, 00:20:04.466 { 00:20:04.466 "name": null, 00:20:04.466 "uuid": "750231c9-750a-4c0b-a37c-085d43ddea38", 00:20:04.466 "is_configured": false, 00:20:04.466 "data_offset": 0, 00:20:04.466 "data_size": 65536 00:20:04.466 }, 00:20:04.466 { 00:20:04.466 "name": "BaseBdev3", 00:20:04.466 "uuid": "829a460d-7d3f-47a8-a7c7-d178cb0fb0fc", 00:20:04.466 "is_configured": true, 00:20:04.466 "data_offset": 0, 00:20:04.466 "data_size": 65536 00:20:04.466 } 00:20:04.466 ] 00:20:04.466 }' 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.466 06:46:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.032 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.032 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:05.032 06:46:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.032 06:46:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.032 06:46:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.032 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:05.032 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:05.032 06:46:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.032 06:46:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.032 [2024-12-06 06:46:23.630211] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.292 "name": "Existed_Raid", 00:20:05.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.292 "strip_size_kb": 64, 00:20:05.292 "state": "configuring", 00:20:05.292 "raid_level": "raid5f", 00:20:05.292 "superblock": false, 00:20:05.292 "num_base_bdevs": 3, 00:20:05.292 "num_base_bdevs_discovered": 1, 00:20:05.292 "num_base_bdevs_operational": 3, 00:20:05.292 "base_bdevs_list": [ 00:20:05.292 { 00:20:05.292 "name": null, 00:20:05.292 "uuid": "1c09d919-ee1e-4d11-9c48-4c14623c783d", 00:20:05.292 "is_configured": false, 00:20:05.292 "data_offset": 0, 00:20:05.292 "data_size": 65536 00:20:05.292 }, 00:20:05.292 { 00:20:05.292 "name": null, 00:20:05.292 "uuid": "750231c9-750a-4c0b-a37c-085d43ddea38", 00:20:05.292 "is_configured": false, 00:20:05.292 "data_offset": 0, 00:20:05.292 "data_size": 65536 00:20:05.292 }, 00:20:05.292 { 00:20:05.292 "name": "BaseBdev3", 00:20:05.292 "uuid": "829a460d-7d3f-47a8-a7c7-d178cb0fb0fc", 00:20:05.292 "is_configured": true, 00:20:05.292 "data_offset": 0, 00:20:05.292 "data_size": 65536 00:20:05.292 } 00:20:05.292 ] 00:20:05.292 }' 00:20:05.292 06:46:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.293 06:46:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.861 [2024-12-06 06:46:24.300283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.861 06:46:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.861 "name": "Existed_Raid", 00:20:05.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.861 "strip_size_kb": 64, 00:20:05.861 "state": "configuring", 00:20:05.861 "raid_level": "raid5f", 00:20:05.861 "superblock": false, 00:20:05.861 "num_base_bdevs": 3, 00:20:05.861 "num_base_bdevs_discovered": 2, 00:20:05.861 "num_base_bdevs_operational": 3, 00:20:05.861 "base_bdevs_list": [ 00:20:05.861 { 00:20:05.861 "name": null, 00:20:05.861 "uuid": "1c09d919-ee1e-4d11-9c48-4c14623c783d", 00:20:05.861 "is_configured": false, 00:20:05.861 "data_offset": 0, 00:20:05.861 "data_size": 65536 00:20:05.861 }, 00:20:05.861 { 00:20:05.861 "name": "BaseBdev2", 00:20:05.861 "uuid": "750231c9-750a-4c0b-a37c-085d43ddea38", 00:20:05.861 "is_configured": true, 00:20:05.861 "data_offset": 0, 00:20:05.861 "data_size": 65536 00:20:05.861 }, 00:20:05.861 { 00:20:05.861 "name": "BaseBdev3", 00:20:05.861 "uuid": "829a460d-7d3f-47a8-a7c7-d178cb0fb0fc", 00:20:05.861 "is_configured": true, 00:20:05.861 "data_offset": 0, 00:20:05.861 "data_size": 65536 00:20:05.861 } 00:20:05.861 ] 00:20:05.861 }' 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.861 06:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.428 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.428 06:46:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:06.428 06:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.428 06:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.428 06:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.428 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:06.428 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.428 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:06.428 06:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.428 06:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.428 06:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.428 06:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1c09d919-ee1e-4d11-9c48-4c14623c783d 00:20:06.428 06:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.428 06:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.428 [2024-12-06 06:46:25.003094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:06.428 [2024-12-06 06:46:25.003165] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:06.428 [2024-12-06 06:46:25.003183] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:06.428 [2024-12-06 06:46:25.003502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:20:06.429 [2024-12-06 06:46:25.008543] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:06.429 [2024-12-06 06:46:25.008574] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:06.429 [2024-12-06 06:46:25.008904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.429 NewBaseBdev 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.429 06:46:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.429 [ 00:20:06.429 { 00:20:06.429 "name": "NewBaseBdev", 00:20:06.429 "aliases": [ 00:20:06.429 "1c09d919-ee1e-4d11-9c48-4c14623c783d" 00:20:06.429 ], 00:20:06.429 "product_name": "Malloc disk", 00:20:06.429 "block_size": 512, 00:20:06.429 "num_blocks": 65536, 00:20:06.429 "uuid": "1c09d919-ee1e-4d11-9c48-4c14623c783d", 00:20:06.429 "assigned_rate_limits": { 00:20:06.429 "rw_ios_per_sec": 0, 00:20:06.429 "rw_mbytes_per_sec": 0, 00:20:06.429 "r_mbytes_per_sec": 0, 00:20:06.429 "w_mbytes_per_sec": 0 00:20:06.429 }, 00:20:06.429 "claimed": true, 00:20:06.429 "claim_type": "exclusive_write", 00:20:06.429 "zoned": false, 00:20:06.429 "supported_io_types": { 00:20:06.429 "read": true, 00:20:06.429 "write": true, 00:20:06.429 "unmap": true, 00:20:06.429 "flush": true, 00:20:06.429 "reset": true, 00:20:06.429 "nvme_admin": false, 00:20:06.429 "nvme_io": false, 00:20:06.429 "nvme_io_md": false, 00:20:06.429 "write_zeroes": true, 00:20:06.429 "zcopy": true, 00:20:06.429 "get_zone_info": false, 00:20:06.429 "zone_management": false, 00:20:06.429 "zone_append": false, 00:20:06.429 "compare": false, 00:20:06.429 "compare_and_write": false, 00:20:06.429 "abort": true, 00:20:06.429 "seek_hole": false, 00:20:06.429 "seek_data": false, 00:20:06.429 "copy": true, 00:20:06.429 "nvme_iov_md": false 00:20:06.429 }, 00:20:06.429 "memory_domains": [ 00:20:06.429 { 00:20:06.429 "dma_device_id": "system", 00:20:06.429 "dma_device_type": 1 00:20:06.429 }, 00:20:06.429 { 00:20:06.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.429 "dma_device_type": 2 00:20:06.429 } 00:20:06.429 ], 00:20:06.429 "driver_specific": {} 00:20:06.429 } 00:20:06.429 ] 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:06.429 06:46:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.429 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.687 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.687 "name": "Existed_Raid", 00:20:06.687 "uuid": "977e1b5f-3da8-40c2-ab2c-0064f1558853", 00:20:06.687 "strip_size_kb": 64, 00:20:06.687 "state": "online", 
00:20:06.687 "raid_level": "raid5f", 00:20:06.687 "superblock": false, 00:20:06.687 "num_base_bdevs": 3, 00:20:06.687 "num_base_bdevs_discovered": 3, 00:20:06.687 "num_base_bdevs_operational": 3, 00:20:06.687 "base_bdevs_list": [ 00:20:06.687 { 00:20:06.687 "name": "NewBaseBdev", 00:20:06.687 "uuid": "1c09d919-ee1e-4d11-9c48-4c14623c783d", 00:20:06.687 "is_configured": true, 00:20:06.687 "data_offset": 0, 00:20:06.687 "data_size": 65536 00:20:06.687 }, 00:20:06.687 { 00:20:06.687 "name": "BaseBdev2", 00:20:06.687 "uuid": "750231c9-750a-4c0b-a37c-085d43ddea38", 00:20:06.687 "is_configured": true, 00:20:06.687 "data_offset": 0, 00:20:06.687 "data_size": 65536 00:20:06.687 }, 00:20:06.687 { 00:20:06.687 "name": "BaseBdev3", 00:20:06.687 "uuid": "829a460d-7d3f-47a8-a7c7-d178cb0fb0fc", 00:20:06.687 "is_configured": true, 00:20:06.687 "data_offset": 0, 00:20:06.687 "data_size": 65536 00:20:06.687 } 00:20:06.687 ] 00:20:06.687 }' 00:20:06.687 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.687 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.944 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:06.944 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:06.944 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:06.944 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:06.944 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:06.945 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:06.945 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:06.945 06:46:25 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:06.945 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.945 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.945 [2024-12-06 06:46:25.558963] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:06.945 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:07.204 "name": "Existed_Raid", 00:20:07.204 "aliases": [ 00:20:07.204 "977e1b5f-3da8-40c2-ab2c-0064f1558853" 00:20:07.204 ], 00:20:07.204 "product_name": "Raid Volume", 00:20:07.204 "block_size": 512, 00:20:07.204 "num_blocks": 131072, 00:20:07.204 "uuid": "977e1b5f-3da8-40c2-ab2c-0064f1558853", 00:20:07.204 "assigned_rate_limits": { 00:20:07.204 "rw_ios_per_sec": 0, 00:20:07.204 "rw_mbytes_per_sec": 0, 00:20:07.204 "r_mbytes_per_sec": 0, 00:20:07.204 "w_mbytes_per_sec": 0 00:20:07.204 }, 00:20:07.204 "claimed": false, 00:20:07.204 "zoned": false, 00:20:07.204 "supported_io_types": { 00:20:07.204 "read": true, 00:20:07.204 "write": true, 00:20:07.204 "unmap": false, 00:20:07.204 "flush": false, 00:20:07.204 "reset": true, 00:20:07.204 "nvme_admin": false, 00:20:07.204 "nvme_io": false, 00:20:07.204 "nvme_io_md": false, 00:20:07.204 "write_zeroes": true, 00:20:07.204 "zcopy": false, 00:20:07.204 "get_zone_info": false, 00:20:07.204 "zone_management": false, 00:20:07.204 "zone_append": false, 00:20:07.204 "compare": false, 00:20:07.204 "compare_and_write": false, 00:20:07.204 "abort": false, 00:20:07.204 "seek_hole": false, 00:20:07.204 "seek_data": false, 00:20:07.204 "copy": false, 00:20:07.204 "nvme_iov_md": false 00:20:07.204 }, 00:20:07.204 "driver_specific": { 00:20:07.204 "raid": { 00:20:07.204 "uuid": "977e1b5f-3da8-40c2-ab2c-0064f1558853", 
00:20:07.204 "strip_size_kb": 64, 00:20:07.204 "state": "online", 00:20:07.204 "raid_level": "raid5f", 00:20:07.204 "superblock": false, 00:20:07.204 "num_base_bdevs": 3, 00:20:07.204 "num_base_bdevs_discovered": 3, 00:20:07.204 "num_base_bdevs_operational": 3, 00:20:07.204 "base_bdevs_list": [ 00:20:07.204 { 00:20:07.204 "name": "NewBaseBdev", 00:20:07.204 "uuid": "1c09d919-ee1e-4d11-9c48-4c14623c783d", 00:20:07.204 "is_configured": true, 00:20:07.204 "data_offset": 0, 00:20:07.204 "data_size": 65536 00:20:07.204 }, 00:20:07.204 { 00:20:07.204 "name": "BaseBdev2", 00:20:07.204 "uuid": "750231c9-750a-4c0b-a37c-085d43ddea38", 00:20:07.204 "is_configured": true, 00:20:07.204 "data_offset": 0, 00:20:07.204 "data_size": 65536 00:20:07.204 }, 00:20:07.204 { 00:20:07.204 "name": "BaseBdev3", 00:20:07.204 "uuid": "829a460d-7d3f-47a8-a7c7-d178cb0fb0fc", 00:20:07.204 "is_configured": true, 00:20:07.204 "data_offset": 0, 00:20:07.204 "data_size": 65536 00:20:07.204 } 00:20:07.204 ] 00:20:07.204 } 00:20:07.204 } 00:20:07.204 }' 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:07.204 BaseBdev2 00:20:07.204 BaseBdev3' 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:07.204 06:46:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.204 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.463 [2024-12-06 06:46:25.890840] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:07.463 [2024-12-06 06:46:25.890877] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:07.463 [2024-12-06 06:46:25.890982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:07.463 [2024-12-06 06:46:25.891357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:07.463 [2024-12-06 06:46:25.891392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80402 00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80402 ']' 00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80402 
00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80402 00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:07.463 killing process with pid 80402 00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80402' 00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80402 00:20:07.463 [2024-12-06 06:46:25.927489] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:07.463 06:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80402 00:20:07.720 [2024-12-06 06:46:26.198167] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:08.712 06:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:08.712 00:20:08.712 real 0m11.902s 00:20:08.712 user 0m19.709s 00:20:08.712 sys 0m1.694s 00:20:08.712 06:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.712 06:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.712 ************************************ 00:20:08.712 END TEST raid5f_state_function_test 00:20:08.712 ************************************ 00:20:08.712 06:46:27 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:20:08.712 06:46:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:08.712 
06:46:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.712 06:46:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:08.712 ************************************ 00:20:08.712 START TEST raid5f_state_function_test_sb 00:20:08.712 ************************************ 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:08.713 
06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81041 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81041' 00:20:08.713 Process raid pid: 81041 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81041 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:08.713 06:46:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81041 ']' 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.713 06:46:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.970 [2024-12-06 06:46:27.432676] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:20:08.970 [2024-12-06 06:46:27.433405] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.227 [2024-12-06 06:46:27.617739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.227 [2024-12-06 06:46:27.775979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.483 [2024-12-06 06:46:28.015050] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:09.483 [2024-12-06 06:46:28.015103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:10.048 06:46:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.048 [2024-12-06 06:46:28.454776] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:10.048 [2024-12-06 06:46:28.454872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:10.048 [2024-12-06 06:46:28.454891] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:10.048 [2024-12-06 06:46:28.454909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:10.048 [2024-12-06 06:46:28.454919] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:10.048 [2024-12-06 06:46:28.454934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.048 "name": "Existed_Raid", 00:20:10.048 "uuid": "e0312002-2c23-4dc7-bbba-f6b040b8c3b4", 00:20:10.048 "strip_size_kb": 64, 00:20:10.048 "state": "configuring", 00:20:10.048 "raid_level": "raid5f", 00:20:10.048 "superblock": true, 00:20:10.048 "num_base_bdevs": 3, 00:20:10.048 "num_base_bdevs_discovered": 0, 00:20:10.048 "num_base_bdevs_operational": 3, 00:20:10.048 "base_bdevs_list": [ 00:20:10.048 { 00:20:10.048 "name": "BaseBdev1", 00:20:10.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.048 "is_configured": false, 00:20:10.048 "data_offset": 0, 00:20:10.048 "data_size": 0 00:20:10.048 }, 00:20:10.048 { 00:20:10.048 "name": "BaseBdev2", 00:20:10.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.048 "is_configured": false, 00:20:10.048 
"data_offset": 0, 00:20:10.048 "data_size": 0 00:20:10.048 }, 00:20:10.048 { 00:20:10.048 "name": "BaseBdev3", 00:20:10.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.048 "is_configured": false, 00:20:10.048 "data_offset": 0, 00:20:10.048 "data_size": 0 00:20:10.048 } 00:20:10.048 ] 00:20:10.048 }' 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.048 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.310 06:46:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:10.310 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.310 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.310 [2024-12-06 06:46:28.938843] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:10.310 [2024-12-06 06:46:28.938891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:10.310 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.310 06:46:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:10.310 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.310 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.310 [2024-12-06 06:46:28.946815] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:10.310 [2024-12-06 06:46:28.946885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:10.310 [2024-12-06 06:46:28.946903] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:10.310 [2024-12-06 06:46:28.946920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:10.310 [2024-12-06 06:46:28.946930] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:10.310 [2024-12-06 06:46:28.946943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:10.310 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.310 06:46:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:10.310 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.310 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.582 [2024-12-06 06:46:28.991634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:10.582 BaseBdev1 00:20:10.582 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.582 06:46:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:10.582 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:10.582 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:10.582 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:10.582 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:10.582 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:10.582 06:46:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:10.582 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.582 06:46:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.582 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.582 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:10.582 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.582 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.582 [ 00:20:10.582 { 00:20:10.582 "name": "BaseBdev1", 00:20:10.582 "aliases": [ 00:20:10.582 "ce568bb6-d1cc-4f33-bc97-14cc04ed215d" 00:20:10.582 ], 00:20:10.582 "product_name": "Malloc disk", 00:20:10.582 "block_size": 512, 00:20:10.582 "num_blocks": 65536, 00:20:10.582 "uuid": "ce568bb6-d1cc-4f33-bc97-14cc04ed215d", 00:20:10.582 "assigned_rate_limits": { 00:20:10.582 "rw_ios_per_sec": 0, 00:20:10.582 "rw_mbytes_per_sec": 0, 00:20:10.582 "r_mbytes_per_sec": 0, 00:20:10.582 "w_mbytes_per_sec": 0 00:20:10.582 }, 00:20:10.582 "claimed": true, 00:20:10.582 "claim_type": "exclusive_write", 00:20:10.582 "zoned": false, 00:20:10.582 "supported_io_types": { 00:20:10.582 "read": true, 00:20:10.582 "write": true, 00:20:10.582 "unmap": true, 00:20:10.582 "flush": true, 00:20:10.582 "reset": true, 00:20:10.582 "nvme_admin": false, 00:20:10.582 "nvme_io": false, 00:20:10.582 "nvme_io_md": false, 00:20:10.583 "write_zeroes": true, 00:20:10.583 "zcopy": true, 00:20:10.583 "get_zone_info": false, 00:20:10.583 "zone_management": false, 00:20:10.583 "zone_append": false, 00:20:10.583 "compare": false, 00:20:10.583 "compare_and_write": false, 00:20:10.583 "abort": true, 00:20:10.583 "seek_hole": false, 00:20:10.583 
"seek_data": false, 00:20:10.583 "copy": true, 00:20:10.583 "nvme_iov_md": false 00:20:10.583 }, 00:20:10.583 "memory_domains": [ 00:20:10.583 { 00:20:10.583 "dma_device_id": "system", 00:20:10.583 "dma_device_type": 1 00:20:10.583 }, 00:20:10.583 { 00:20:10.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.583 "dma_device_type": 2 00:20:10.583 } 00:20:10.583 ], 00:20:10.583 "driver_specific": {} 00:20:10.583 } 00:20:10.583 ] 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.583 "name": "Existed_Raid", 00:20:10.583 "uuid": "d5c9571f-62f1-45ae-943e-e3df805474de", 00:20:10.583 "strip_size_kb": 64, 00:20:10.583 "state": "configuring", 00:20:10.583 "raid_level": "raid5f", 00:20:10.583 "superblock": true, 00:20:10.583 "num_base_bdevs": 3, 00:20:10.583 "num_base_bdevs_discovered": 1, 00:20:10.583 "num_base_bdevs_operational": 3, 00:20:10.583 "base_bdevs_list": [ 00:20:10.583 { 00:20:10.583 "name": "BaseBdev1", 00:20:10.583 "uuid": "ce568bb6-d1cc-4f33-bc97-14cc04ed215d", 00:20:10.583 "is_configured": true, 00:20:10.583 "data_offset": 2048, 00:20:10.583 "data_size": 63488 00:20:10.583 }, 00:20:10.583 { 00:20:10.583 "name": "BaseBdev2", 00:20:10.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.583 "is_configured": false, 00:20:10.583 "data_offset": 0, 00:20:10.583 "data_size": 0 00:20:10.583 }, 00:20:10.583 { 00:20:10.583 "name": "BaseBdev3", 00:20:10.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.583 "is_configured": false, 00:20:10.583 "data_offset": 0, 00:20:10.583 "data_size": 0 00:20:10.583 } 00:20:10.583 ] 00:20:10.583 }' 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.583 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.150 [2024-12-06 06:46:29.507850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:11.150 [2024-12-06 06:46:29.507918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.150 [2024-12-06 06:46:29.515908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:11.150 [2024-12-06 06:46:29.518312] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:11.150 [2024-12-06 06:46:29.518367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:11.150 [2024-12-06 06:46:29.518383] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:11.150 [2024-12-06 06:46:29.518398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.150 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.150 "name": 
"Existed_Raid", 00:20:11.150 "uuid": "d23861ca-bff1-4787-9258-7ce423261258", 00:20:11.150 "strip_size_kb": 64, 00:20:11.150 "state": "configuring", 00:20:11.150 "raid_level": "raid5f", 00:20:11.150 "superblock": true, 00:20:11.150 "num_base_bdevs": 3, 00:20:11.150 "num_base_bdevs_discovered": 1, 00:20:11.150 "num_base_bdevs_operational": 3, 00:20:11.150 "base_bdevs_list": [ 00:20:11.150 { 00:20:11.150 "name": "BaseBdev1", 00:20:11.150 "uuid": "ce568bb6-d1cc-4f33-bc97-14cc04ed215d", 00:20:11.150 "is_configured": true, 00:20:11.150 "data_offset": 2048, 00:20:11.150 "data_size": 63488 00:20:11.151 }, 00:20:11.151 { 00:20:11.151 "name": "BaseBdev2", 00:20:11.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.151 "is_configured": false, 00:20:11.151 "data_offset": 0, 00:20:11.151 "data_size": 0 00:20:11.151 }, 00:20:11.151 { 00:20:11.151 "name": "BaseBdev3", 00:20:11.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.151 "is_configured": false, 00:20:11.151 "data_offset": 0, 00:20:11.151 "data_size": 0 00:20:11.151 } 00:20:11.151 ] 00:20:11.151 }' 00:20:11.151 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.151 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.409 06:46:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:11.409 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.409 06:46:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.409 [2024-12-06 06:46:30.006122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:11.409 BaseBdev2 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.409 [ 00:20:11.409 { 00:20:11.409 "name": "BaseBdev2", 00:20:11.409 "aliases": [ 00:20:11.409 "683fe439-4793-49b5-8b6f-3e4134bfed2f" 00:20:11.409 ], 00:20:11.409 "product_name": "Malloc disk", 00:20:11.409 "block_size": 512, 00:20:11.409 "num_blocks": 65536, 00:20:11.409 "uuid": "683fe439-4793-49b5-8b6f-3e4134bfed2f", 00:20:11.409 "assigned_rate_limits": { 00:20:11.409 "rw_ios_per_sec": 0, 00:20:11.409 "rw_mbytes_per_sec": 0, 00:20:11.409 "r_mbytes_per_sec": 0, 00:20:11.409 "w_mbytes_per_sec": 0 00:20:11.409 }, 00:20:11.409 "claimed": true, 
00:20:11.409 "claim_type": "exclusive_write", 00:20:11.409 "zoned": false, 00:20:11.409 "supported_io_types": { 00:20:11.409 "read": true, 00:20:11.409 "write": true, 00:20:11.409 "unmap": true, 00:20:11.409 "flush": true, 00:20:11.409 "reset": true, 00:20:11.409 "nvme_admin": false, 00:20:11.409 "nvme_io": false, 00:20:11.409 "nvme_io_md": false, 00:20:11.409 "write_zeroes": true, 00:20:11.409 "zcopy": true, 00:20:11.409 "get_zone_info": false, 00:20:11.409 "zone_management": false, 00:20:11.409 "zone_append": false, 00:20:11.409 "compare": false, 00:20:11.409 "compare_and_write": false, 00:20:11.409 "abort": true, 00:20:11.409 "seek_hole": false, 00:20:11.409 "seek_data": false, 00:20:11.409 "copy": true, 00:20:11.409 "nvme_iov_md": false 00:20:11.409 }, 00:20:11.409 "memory_domains": [ 00:20:11.409 { 00:20:11.409 "dma_device_id": "system", 00:20:11.409 "dma_device_type": 1 00:20:11.409 }, 00:20:11.409 { 00:20:11.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.409 "dma_device_type": 2 00:20:11.409 } 00:20:11.409 ], 00:20:11.409 "driver_specific": {} 00:20:11.409 } 00:20:11.409 ] 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:11.409 06:46:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.409 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.668 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.668 "name": "Existed_Raid", 00:20:11.668 "uuid": "d23861ca-bff1-4787-9258-7ce423261258", 00:20:11.668 "strip_size_kb": 64, 00:20:11.668 "state": "configuring", 00:20:11.668 "raid_level": "raid5f", 00:20:11.668 "superblock": true, 00:20:11.668 "num_base_bdevs": 3, 00:20:11.668 "num_base_bdevs_discovered": 2, 00:20:11.668 "num_base_bdevs_operational": 3, 00:20:11.668 "base_bdevs_list": [ 00:20:11.668 { 00:20:11.668 "name": "BaseBdev1", 00:20:11.668 "uuid": "ce568bb6-d1cc-4f33-bc97-14cc04ed215d", 
00:20:11.668 "is_configured": true, 00:20:11.668 "data_offset": 2048, 00:20:11.668 "data_size": 63488 00:20:11.668 }, 00:20:11.668 { 00:20:11.668 "name": "BaseBdev2", 00:20:11.668 "uuid": "683fe439-4793-49b5-8b6f-3e4134bfed2f", 00:20:11.668 "is_configured": true, 00:20:11.668 "data_offset": 2048, 00:20:11.668 "data_size": 63488 00:20:11.668 }, 00:20:11.668 { 00:20:11.668 "name": "BaseBdev3", 00:20:11.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.668 "is_configured": false, 00:20:11.668 "data_offset": 0, 00:20:11.668 "data_size": 0 00:20:11.668 } 00:20:11.668 ] 00:20:11.668 }' 00:20:11.668 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.668 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.926 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:11.926 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.926 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.184 [2024-12-06 06:46:30.572656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:12.184 [2024-12-06 06:46:30.572993] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:12.184 [2024-12-06 06:46:30.573030] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:12.184 BaseBdev3 00:20:12.184 [2024-12-06 06:46:30.573414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:12.184 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.184 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:12.184 06:46:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:12.184 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:12.184 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:12.184 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:12.184 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:12.184 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:12.184 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.184 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.184 [2024-12-06 06:46:30.579262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:12.184 [2024-12-06 06:46:30.579294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:12.184 [2024-12-06 06:46:30.579641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.184 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.184 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:12.184 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.184 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.184 [ 00:20:12.184 { 00:20:12.184 "name": "BaseBdev3", 00:20:12.184 "aliases": [ 00:20:12.184 "dd822520-ab2f-438a-bc51-405a2eeea558" 00:20:12.184 ], 00:20:12.184 "product_name": "Malloc disk", 00:20:12.184 "block_size": 512, 00:20:12.184 
"num_blocks": 65536, 00:20:12.184 "uuid": "dd822520-ab2f-438a-bc51-405a2eeea558", 00:20:12.184 "assigned_rate_limits": { 00:20:12.184 "rw_ios_per_sec": 0, 00:20:12.184 "rw_mbytes_per_sec": 0, 00:20:12.184 "r_mbytes_per_sec": 0, 00:20:12.184 "w_mbytes_per_sec": 0 00:20:12.184 }, 00:20:12.184 "claimed": true, 00:20:12.184 "claim_type": "exclusive_write", 00:20:12.184 "zoned": false, 00:20:12.184 "supported_io_types": { 00:20:12.184 "read": true, 00:20:12.184 "write": true, 00:20:12.184 "unmap": true, 00:20:12.184 "flush": true, 00:20:12.184 "reset": true, 00:20:12.184 "nvme_admin": false, 00:20:12.184 "nvme_io": false, 00:20:12.184 "nvme_io_md": false, 00:20:12.184 "write_zeroes": true, 00:20:12.184 "zcopy": true, 00:20:12.184 "get_zone_info": false, 00:20:12.184 "zone_management": false, 00:20:12.184 "zone_append": false, 00:20:12.184 "compare": false, 00:20:12.184 "compare_and_write": false, 00:20:12.184 "abort": true, 00:20:12.184 "seek_hole": false, 00:20:12.184 "seek_data": false, 00:20:12.184 "copy": true, 00:20:12.184 "nvme_iov_md": false 00:20:12.184 }, 00:20:12.184 "memory_domains": [ 00:20:12.184 { 00:20:12.184 "dma_device_id": "system", 00:20:12.184 "dma_device_type": 1 00:20:12.184 }, 00:20:12.184 { 00:20:12.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.185 "dma_device_type": 2 00:20:12.185 } 00:20:12.185 ], 00:20:12.185 "driver_specific": {} 00:20:12.185 } 00:20:12.185 ] 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.185 "name": "Existed_Raid", 00:20:12.185 "uuid": "d23861ca-bff1-4787-9258-7ce423261258", 00:20:12.185 "strip_size_kb": 64, 00:20:12.185 "state": "online", 00:20:12.185 "raid_level": "raid5f", 00:20:12.185 "superblock": true, 
00:20:12.185 "num_base_bdevs": 3, 00:20:12.185 "num_base_bdevs_discovered": 3, 00:20:12.185 "num_base_bdevs_operational": 3, 00:20:12.185 "base_bdevs_list": [ 00:20:12.185 { 00:20:12.185 "name": "BaseBdev1", 00:20:12.185 "uuid": "ce568bb6-d1cc-4f33-bc97-14cc04ed215d", 00:20:12.185 "is_configured": true, 00:20:12.185 "data_offset": 2048, 00:20:12.185 "data_size": 63488 00:20:12.185 }, 00:20:12.185 { 00:20:12.185 "name": "BaseBdev2", 00:20:12.185 "uuid": "683fe439-4793-49b5-8b6f-3e4134bfed2f", 00:20:12.185 "is_configured": true, 00:20:12.185 "data_offset": 2048, 00:20:12.185 "data_size": 63488 00:20:12.185 }, 00:20:12.185 { 00:20:12.185 "name": "BaseBdev3", 00:20:12.185 "uuid": "dd822520-ab2f-438a-bc51-405a2eeea558", 00:20:12.185 "is_configured": true, 00:20:12.185 "data_offset": 2048, 00:20:12.185 "data_size": 63488 00:20:12.185 } 00:20:12.185 ] 00:20:12.185 }' 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.185 06:46:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.443 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:12.443 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:12.443 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:12.443 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:12.443 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:12.444 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:12.444 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:12.444 06:46:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.444 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.444 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:12.444 [2024-12-06 06:46:31.069636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.444 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.702 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:12.702 "name": "Existed_Raid", 00:20:12.702 "aliases": [ 00:20:12.702 "d23861ca-bff1-4787-9258-7ce423261258" 00:20:12.702 ], 00:20:12.702 "product_name": "Raid Volume", 00:20:12.702 "block_size": 512, 00:20:12.702 "num_blocks": 126976, 00:20:12.702 "uuid": "d23861ca-bff1-4787-9258-7ce423261258", 00:20:12.702 "assigned_rate_limits": { 00:20:12.702 "rw_ios_per_sec": 0, 00:20:12.702 "rw_mbytes_per_sec": 0, 00:20:12.702 "r_mbytes_per_sec": 0, 00:20:12.702 "w_mbytes_per_sec": 0 00:20:12.702 }, 00:20:12.702 "claimed": false, 00:20:12.702 "zoned": false, 00:20:12.702 "supported_io_types": { 00:20:12.702 "read": true, 00:20:12.702 "write": true, 00:20:12.702 "unmap": false, 00:20:12.702 "flush": false, 00:20:12.702 "reset": true, 00:20:12.702 "nvme_admin": false, 00:20:12.702 "nvme_io": false, 00:20:12.702 "nvme_io_md": false, 00:20:12.702 "write_zeroes": true, 00:20:12.702 "zcopy": false, 00:20:12.702 "get_zone_info": false, 00:20:12.702 "zone_management": false, 00:20:12.702 "zone_append": false, 00:20:12.702 "compare": false, 00:20:12.702 "compare_and_write": false, 00:20:12.702 "abort": false, 00:20:12.702 "seek_hole": false, 00:20:12.702 "seek_data": false, 00:20:12.702 "copy": false, 00:20:12.702 "nvme_iov_md": false 00:20:12.702 }, 00:20:12.702 "driver_specific": { 00:20:12.702 "raid": { 00:20:12.702 "uuid": "d23861ca-bff1-4787-9258-7ce423261258", 00:20:12.702 
"strip_size_kb": 64, 00:20:12.702 "state": "online", 00:20:12.702 "raid_level": "raid5f", 00:20:12.702 "superblock": true, 00:20:12.702 "num_base_bdevs": 3, 00:20:12.702 "num_base_bdevs_discovered": 3, 00:20:12.702 "num_base_bdevs_operational": 3, 00:20:12.702 "base_bdevs_list": [ 00:20:12.702 { 00:20:12.702 "name": "BaseBdev1", 00:20:12.702 "uuid": "ce568bb6-d1cc-4f33-bc97-14cc04ed215d", 00:20:12.702 "is_configured": true, 00:20:12.702 "data_offset": 2048, 00:20:12.702 "data_size": 63488 00:20:12.702 }, 00:20:12.702 { 00:20:12.702 "name": "BaseBdev2", 00:20:12.702 "uuid": "683fe439-4793-49b5-8b6f-3e4134bfed2f", 00:20:12.702 "is_configured": true, 00:20:12.702 "data_offset": 2048, 00:20:12.702 "data_size": 63488 00:20:12.702 }, 00:20:12.702 { 00:20:12.702 "name": "BaseBdev3", 00:20:12.702 "uuid": "dd822520-ab2f-438a-bc51-405a2eeea558", 00:20:12.702 "is_configured": true, 00:20:12.702 "data_offset": 2048, 00:20:12.702 "data_size": 63488 00:20:12.702 } 00:20:12.702 ] 00:20:12.702 } 00:20:12.702 } 00:20:12.702 }' 00:20:12.702 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:12.702 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:12.702 BaseBdev2 00:20:12.702 BaseBdev3' 00:20:12.702 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.702 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:12.702 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.702 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:12.702 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.702 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.702 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.702 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.702 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.702 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.702 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.702 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:12.702 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.702 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.703 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.703 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.703 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.703 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.703 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.703 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.703 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:20:12.703 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.703 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.703 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.703 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.703 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.703 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:12.703 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.703 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.703 [2024-12-06 06:46:31.337458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.962 
06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.962 "name": "Existed_Raid", 00:20:12.962 "uuid": "d23861ca-bff1-4787-9258-7ce423261258", 00:20:12.962 "strip_size_kb": 64, 00:20:12.962 "state": "online", 00:20:12.962 "raid_level": "raid5f", 00:20:12.962 "superblock": true, 00:20:12.962 "num_base_bdevs": 3, 00:20:12.962 "num_base_bdevs_discovered": 2, 00:20:12.962 "num_base_bdevs_operational": 2, 00:20:12.962 
"base_bdevs_list": [ 00:20:12.962 { 00:20:12.962 "name": null, 00:20:12.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.962 "is_configured": false, 00:20:12.962 "data_offset": 0, 00:20:12.962 "data_size": 63488 00:20:12.962 }, 00:20:12.962 { 00:20:12.962 "name": "BaseBdev2", 00:20:12.962 "uuid": "683fe439-4793-49b5-8b6f-3e4134bfed2f", 00:20:12.962 "is_configured": true, 00:20:12.962 "data_offset": 2048, 00:20:12.962 "data_size": 63488 00:20:12.962 }, 00:20:12.962 { 00:20:12.962 "name": "BaseBdev3", 00:20:12.962 "uuid": "dd822520-ab2f-438a-bc51-405a2eeea558", 00:20:12.962 "is_configured": true, 00:20:12.962 "data_offset": 2048, 00:20:12.962 "data_size": 63488 00:20:12.962 } 00:20:12.962 ] 00:20:12.962 }' 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.962 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.530 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:13.530 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:13.530 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.530 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.530 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.530 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:13.530 06:46:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.530 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:13.530 06:46:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:13.530 06:46:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:13.530 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.530 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.530 [2024-12-06 06:46:32.006392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:13.530 [2024-12-06 06:46:32.006615] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.530 [2024-12-06 06:46:32.092940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.530 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.530 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:13.530 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:13.530 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.530 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.530 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.530 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:13.530 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.530 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:13.530 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:13.530 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:13.530 06:46:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.530 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.530 [2024-12-06 06:46:32.144994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:13.530 [2024-12-06 06:46:32.145059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.790 BaseBdev2 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.790 [ 00:20:13.790 { 00:20:13.790 "name": "BaseBdev2", 
00:20:13.790 "aliases": [ 00:20:13.790 "cfcfea05-658f-4438-9f06-619948571b94" 00:20:13.790 ], 00:20:13.790 "product_name": "Malloc disk", 00:20:13.790 "block_size": 512, 00:20:13.790 "num_blocks": 65536, 00:20:13.790 "uuid": "cfcfea05-658f-4438-9f06-619948571b94", 00:20:13.790 "assigned_rate_limits": { 00:20:13.790 "rw_ios_per_sec": 0, 00:20:13.790 "rw_mbytes_per_sec": 0, 00:20:13.790 "r_mbytes_per_sec": 0, 00:20:13.790 "w_mbytes_per_sec": 0 00:20:13.790 }, 00:20:13.790 "claimed": false, 00:20:13.790 "zoned": false, 00:20:13.790 "supported_io_types": { 00:20:13.790 "read": true, 00:20:13.790 "write": true, 00:20:13.790 "unmap": true, 00:20:13.790 "flush": true, 00:20:13.790 "reset": true, 00:20:13.790 "nvme_admin": false, 00:20:13.790 "nvme_io": false, 00:20:13.790 "nvme_io_md": false, 00:20:13.790 "write_zeroes": true, 00:20:13.790 "zcopy": true, 00:20:13.790 "get_zone_info": false, 00:20:13.790 "zone_management": false, 00:20:13.790 "zone_append": false, 00:20:13.790 "compare": false, 00:20:13.790 "compare_and_write": false, 00:20:13.790 "abort": true, 00:20:13.790 "seek_hole": false, 00:20:13.790 "seek_data": false, 00:20:13.790 "copy": true, 00:20:13.790 "nvme_iov_md": false 00:20:13.790 }, 00:20:13.790 "memory_domains": [ 00:20:13.790 { 00:20:13.790 "dma_device_id": "system", 00:20:13.790 "dma_device_type": 1 00:20:13.790 }, 00:20:13.790 { 00:20:13.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.790 "dma_device_type": 2 00:20:13.790 } 00:20:13.790 ], 00:20:13.790 "driver_specific": {} 00:20:13.790 } 00:20:13.790 ] 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 
00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.790 BaseBdev3 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:13.790 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:13.791 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:13.791 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:13.791 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:13.791 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.791 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.791 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.791 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:13.791 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.791 06:46:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:13.791 [ 00:20:13.791 { 00:20:13.791 "name": "BaseBdev3", 00:20:13.791 "aliases": [ 00:20:13.791 "9165d580-79a1-4dc2-96b6-a411ece7aa44" 00:20:13.791 ], 00:20:13.791 "product_name": "Malloc disk", 00:20:13.791 "block_size": 512, 00:20:13.791 "num_blocks": 65536, 00:20:13.791 "uuid": "9165d580-79a1-4dc2-96b6-a411ece7aa44", 00:20:13.791 "assigned_rate_limits": { 00:20:13.791 "rw_ios_per_sec": 0, 00:20:13.791 "rw_mbytes_per_sec": 0, 00:20:13.791 "r_mbytes_per_sec": 0, 00:20:13.791 "w_mbytes_per_sec": 0 00:20:13.791 }, 00:20:13.791 "claimed": false, 00:20:13.791 "zoned": false, 00:20:13.791 "supported_io_types": { 00:20:13.791 "read": true, 00:20:13.791 "write": true, 00:20:13.791 "unmap": true, 00:20:13.791 "flush": true, 00:20:13.791 "reset": true, 00:20:13.791 "nvme_admin": false, 00:20:13.791 "nvme_io": false, 00:20:13.791 "nvme_io_md": false, 00:20:13.791 "write_zeroes": true, 00:20:13.791 "zcopy": true, 00:20:13.791 "get_zone_info": false, 00:20:13.791 "zone_management": false, 00:20:13.791 "zone_append": false, 00:20:13.791 "compare": false, 00:20:13.791 "compare_and_write": false, 00:20:13.791 "abort": true, 00:20:13.791 "seek_hole": false, 00:20:13.791 "seek_data": false, 00:20:13.791 "copy": true, 00:20:13.791 "nvme_iov_md": false 00:20:13.791 }, 00:20:13.791 "memory_domains": [ 00:20:13.791 { 00:20:13.791 "dma_device_id": "system", 00:20:13.791 "dma_device_type": 1 00:20:13.791 }, 00:20:13.791 { 00:20:13.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.791 "dma_device_type": 2 00:20:13.791 } 00:20:13.791 ], 00:20:13.791 "driver_specific": {} 00:20:13.791 } 00:20:13.791 ] 00:20:13.791 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.791 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:13.791 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:13.791 
06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:13.791 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:13.791 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.791 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.791 [2024-12-06 06:46:32.432874] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:13.791 [2024-12-06 06:46:32.432939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:13.791 [2024-12-06 06:46:32.432969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:13.791 [2024-12-06 06:46:32.435341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.050 "name": "Existed_Raid", 00:20:14.050 "uuid": "c050a58a-c7e4-4a3b-9609-8ac37a31d6f8", 00:20:14.050 "strip_size_kb": 64, 00:20:14.050 "state": "configuring", 00:20:14.050 "raid_level": "raid5f", 00:20:14.050 "superblock": true, 00:20:14.050 "num_base_bdevs": 3, 00:20:14.050 "num_base_bdevs_discovered": 2, 00:20:14.050 "num_base_bdevs_operational": 3, 00:20:14.050 "base_bdevs_list": [ 00:20:14.050 { 00:20:14.050 "name": "BaseBdev1", 00:20:14.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.050 "is_configured": false, 00:20:14.050 "data_offset": 0, 00:20:14.050 "data_size": 0 00:20:14.050 }, 00:20:14.050 { 00:20:14.050 "name": "BaseBdev2", 00:20:14.050 "uuid": "cfcfea05-658f-4438-9f06-619948571b94", 00:20:14.050 "is_configured": true, 00:20:14.050 "data_offset": 2048, 00:20:14.050 "data_size": 63488 00:20:14.050 }, 00:20:14.050 { 00:20:14.050 "name": "BaseBdev3", 00:20:14.050 "uuid": 
"9165d580-79a1-4dc2-96b6-a411ece7aa44", 00:20:14.050 "is_configured": true, 00:20:14.050 "data_offset": 2048, 00:20:14.050 "data_size": 63488 00:20:14.050 } 00:20:14.050 ] 00:20:14.050 }' 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.050 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.313 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:14.313 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.313 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.313 [2024-12-06 06:46:32.937040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:14.313 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.313 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:14.313 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:14.313 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:14.313 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:14.313 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.313 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:14.313 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.313 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.313 06:46:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.313 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.313 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.313 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.313 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.313 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.582 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.582 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.582 "name": "Existed_Raid", 00:20:14.582 "uuid": "c050a58a-c7e4-4a3b-9609-8ac37a31d6f8", 00:20:14.582 "strip_size_kb": 64, 00:20:14.582 "state": "configuring", 00:20:14.582 "raid_level": "raid5f", 00:20:14.582 "superblock": true, 00:20:14.582 "num_base_bdevs": 3, 00:20:14.582 "num_base_bdevs_discovered": 1, 00:20:14.582 "num_base_bdevs_operational": 3, 00:20:14.582 "base_bdevs_list": [ 00:20:14.582 { 00:20:14.582 "name": "BaseBdev1", 00:20:14.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.582 "is_configured": false, 00:20:14.582 "data_offset": 0, 00:20:14.582 "data_size": 0 00:20:14.582 }, 00:20:14.582 { 00:20:14.582 "name": null, 00:20:14.582 "uuid": "cfcfea05-658f-4438-9f06-619948571b94", 00:20:14.582 "is_configured": false, 00:20:14.582 "data_offset": 0, 00:20:14.582 "data_size": 63488 00:20:14.582 }, 00:20:14.582 { 00:20:14.582 "name": "BaseBdev3", 00:20:14.582 "uuid": "9165d580-79a1-4dc2-96b6-a411ece7aa44", 00:20:14.582 "is_configured": true, 00:20:14.582 "data_offset": 2048, 00:20:14.582 "data_size": 63488 00:20:14.582 } 00:20:14.582 ] 
00:20:14.582 }' 00:20:14.582 06:46:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.582 06:46:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.841 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.841 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:14.841 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.841 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.101 [2024-12-06 06:46:33.563062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:15.101 BaseBdev1 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.101 [ 00:20:15.101 { 00:20:15.101 "name": "BaseBdev1", 00:20:15.101 "aliases": [ 00:20:15.101 "01e8eaad-3235-4160-b920-728c325586f5" 00:20:15.101 ], 00:20:15.101 "product_name": "Malloc disk", 00:20:15.101 "block_size": 512, 00:20:15.101 "num_blocks": 65536, 00:20:15.101 "uuid": "01e8eaad-3235-4160-b920-728c325586f5", 00:20:15.101 "assigned_rate_limits": { 00:20:15.101 "rw_ios_per_sec": 0, 00:20:15.101 "rw_mbytes_per_sec": 0, 00:20:15.101 "r_mbytes_per_sec": 0, 00:20:15.101 "w_mbytes_per_sec": 0 00:20:15.101 }, 00:20:15.101 "claimed": true, 00:20:15.101 "claim_type": "exclusive_write", 00:20:15.101 "zoned": false, 00:20:15.101 "supported_io_types": { 00:20:15.101 "read": true, 00:20:15.101 "write": true, 00:20:15.101 "unmap": true, 00:20:15.101 "flush": true, 00:20:15.101 "reset": true, 00:20:15.101 "nvme_admin": false, 00:20:15.101 "nvme_io": false, 00:20:15.101 
"nvme_io_md": false, 00:20:15.101 "write_zeroes": true, 00:20:15.101 "zcopy": true, 00:20:15.101 "get_zone_info": false, 00:20:15.101 "zone_management": false, 00:20:15.101 "zone_append": false, 00:20:15.101 "compare": false, 00:20:15.101 "compare_and_write": false, 00:20:15.101 "abort": true, 00:20:15.101 "seek_hole": false, 00:20:15.101 "seek_data": false, 00:20:15.101 "copy": true, 00:20:15.101 "nvme_iov_md": false 00:20:15.101 }, 00:20:15.101 "memory_domains": [ 00:20:15.101 { 00:20:15.101 "dma_device_id": "system", 00:20:15.101 "dma_device_type": 1 00:20:15.101 }, 00:20:15.101 { 00:20:15.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.101 "dma_device_type": 2 00:20:15.101 } 00:20:15.101 ], 00:20:15.101 "driver_specific": {} 00:20:15.101 } 00:20:15.101 ] 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.101 
06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.101 "name": "Existed_Raid", 00:20:15.101 "uuid": "c050a58a-c7e4-4a3b-9609-8ac37a31d6f8", 00:20:15.101 "strip_size_kb": 64, 00:20:15.101 "state": "configuring", 00:20:15.101 "raid_level": "raid5f", 00:20:15.101 "superblock": true, 00:20:15.101 "num_base_bdevs": 3, 00:20:15.101 "num_base_bdevs_discovered": 2, 00:20:15.101 "num_base_bdevs_operational": 3, 00:20:15.101 "base_bdevs_list": [ 00:20:15.101 { 00:20:15.101 "name": "BaseBdev1", 00:20:15.101 "uuid": "01e8eaad-3235-4160-b920-728c325586f5", 00:20:15.101 "is_configured": true, 00:20:15.101 "data_offset": 2048, 00:20:15.101 "data_size": 63488 00:20:15.101 }, 00:20:15.101 { 00:20:15.101 "name": null, 00:20:15.101 "uuid": "cfcfea05-658f-4438-9f06-619948571b94", 00:20:15.101 "is_configured": false, 00:20:15.101 "data_offset": 0, 00:20:15.101 "data_size": 63488 00:20:15.101 }, 00:20:15.101 { 00:20:15.101 "name": "BaseBdev3", 00:20:15.101 "uuid": "9165d580-79a1-4dc2-96b6-a411ece7aa44", 00:20:15.101 "is_configured": true, 00:20:15.101 "data_offset": 2048, 00:20:15.101 "data_size": 63488 00:20:15.101 } 
00:20:15.101 ] 00:20:15.101 }' 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.101 06:46:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.671 [2024-12-06 06:46:34.187322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.671 "name": "Existed_Raid", 00:20:15.671 "uuid": "c050a58a-c7e4-4a3b-9609-8ac37a31d6f8", 00:20:15.671 "strip_size_kb": 64, 00:20:15.671 "state": "configuring", 00:20:15.671 "raid_level": "raid5f", 00:20:15.671 "superblock": true, 00:20:15.671 "num_base_bdevs": 3, 00:20:15.671 "num_base_bdevs_discovered": 1, 00:20:15.671 "num_base_bdevs_operational": 3, 00:20:15.671 "base_bdevs_list": [ 00:20:15.671 { 00:20:15.671 "name": "BaseBdev1", 00:20:15.671 "uuid": "01e8eaad-3235-4160-b920-728c325586f5", 00:20:15.671 "is_configured": true, 
00:20:15.671 "data_offset": 2048, 00:20:15.671 "data_size": 63488 00:20:15.671 }, 00:20:15.671 { 00:20:15.671 "name": null, 00:20:15.671 "uuid": "cfcfea05-658f-4438-9f06-619948571b94", 00:20:15.671 "is_configured": false, 00:20:15.671 "data_offset": 0, 00:20:15.671 "data_size": 63488 00:20:15.671 }, 00:20:15.671 { 00:20:15.671 "name": null, 00:20:15.671 "uuid": "9165d580-79a1-4dc2-96b6-a411ece7aa44", 00:20:15.671 "is_configured": false, 00:20:15.671 "data_offset": 0, 00:20:15.671 "data_size": 63488 00:20:15.671 } 00:20:15.671 ] 00:20:15.671 }' 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.671 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.240 [2024-12-06 06:46:34.743509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:16.240 06:46:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.240 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:20:16.240 "name": "Existed_Raid", 00:20:16.240 "uuid": "c050a58a-c7e4-4a3b-9609-8ac37a31d6f8", 00:20:16.240 "strip_size_kb": 64, 00:20:16.240 "state": "configuring", 00:20:16.240 "raid_level": "raid5f", 00:20:16.240 "superblock": true, 00:20:16.240 "num_base_bdevs": 3, 00:20:16.240 "num_base_bdevs_discovered": 2, 00:20:16.240 "num_base_bdevs_operational": 3, 00:20:16.240 "base_bdevs_list": [ 00:20:16.240 { 00:20:16.240 "name": "BaseBdev1", 00:20:16.240 "uuid": "01e8eaad-3235-4160-b920-728c325586f5", 00:20:16.240 "is_configured": true, 00:20:16.240 "data_offset": 2048, 00:20:16.240 "data_size": 63488 00:20:16.240 }, 00:20:16.240 { 00:20:16.240 "name": null, 00:20:16.240 "uuid": "cfcfea05-658f-4438-9f06-619948571b94", 00:20:16.240 "is_configured": false, 00:20:16.241 "data_offset": 0, 00:20:16.241 "data_size": 63488 00:20:16.241 }, 00:20:16.241 { 00:20:16.241 "name": "BaseBdev3", 00:20:16.241 "uuid": "9165d580-79a1-4dc2-96b6-a411ece7aa44", 00:20:16.241 "is_configured": true, 00:20:16.241 "data_offset": 2048, 00:20:16.241 "data_size": 63488 00:20:16.241 } 00:20:16.241 ] 00:20:16.241 }' 00:20:16.241 06:46:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.241 06:46:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.808 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.808 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.808 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.808 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:16.808 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.808 06:46:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:16.808 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:16.808 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.808 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.808 [2024-12-06 06:46:35.303678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:16.808 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.809 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:16.809 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:16.809 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:16.809 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:16.809 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:16.809 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:16.809 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.809 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.809 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.809 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.809 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.809 06:46:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.809 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.809 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.809 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.809 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.809 "name": "Existed_Raid", 00:20:16.809 "uuid": "c050a58a-c7e4-4a3b-9609-8ac37a31d6f8", 00:20:16.809 "strip_size_kb": 64, 00:20:16.809 "state": "configuring", 00:20:16.809 "raid_level": "raid5f", 00:20:16.809 "superblock": true, 00:20:16.809 "num_base_bdevs": 3, 00:20:16.809 "num_base_bdevs_discovered": 1, 00:20:16.809 "num_base_bdevs_operational": 3, 00:20:16.809 "base_bdevs_list": [ 00:20:16.809 { 00:20:16.809 "name": null, 00:20:16.809 "uuid": "01e8eaad-3235-4160-b920-728c325586f5", 00:20:16.809 "is_configured": false, 00:20:16.809 "data_offset": 0, 00:20:16.809 "data_size": 63488 00:20:16.809 }, 00:20:16.809 { 00:20:16.809 "name": null, 00:20:16.809 "uuid": "cfcfea05-658f-4438-9f06-619948571b94", 00:20:16.809 "is_configured": false, 00:20:16.809 "data_offset": 0, 00:20:16.809 "data_size": 63488 00:20:16.809 }, 00:20:16.809 { 00:20:16.809 "name": "BaseBdev3", 00:20:16.809 "uuid": "9165d580-79a1-4dc2-96b6-a411ece7aa44", 00:20:16.809 "is_configured": true, 00:20:16.809 "data_offset": 2048, 00:20:16.809 "data_size": 63488 00:20:16.809 } 00:20:16.809 ] 00:20:16.809 }' 00:20:16.809 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.809 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.375 [2024-12-06 06:46:35.960414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:17.375 
06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.375 06:46:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.375 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.375 "name": "Existed_Raid", 00:20:17.375 "uuid": "c050a58a-c7e4-4a3b-9609-8ac37a31d6f8", 00:20:17.375 "strip_size_kb": 64, 00:20:17.375 "state": "configuring", 00:20:17.375 "raid_level": "raid5f", 00:20:17.375 "superblock": true, 00:20:17.375 "num_base_bdevs": 3, 00:20:17.375 "num_base_bdevs_discovered": 2, 00:20:17.375 "num_base_bdevs_operational": 3, 00:20:17.375 "base_bdevs_list": [ 00:20:17.375 { 00:20:17.375 "name": null, 00:20:17.375 "uuid": "01e8eaad-3235-4160-b920-728c325586f5", 00:20:17.375 "is_configured": false, 00:20:17.375 "data_offset": 0, 00:20:17.375 "data_size": 63488 00:20:17.375 }, 00:20:17.375 { 00:20:17.375 "name": "BaseBdev2", 00:20:17.375 "uuid": "cfcfea05-658f-4438-9f06-619948571b94", 00:20:17.375 "is_configured": true, 00:20:17.375 "data_offset": 2048, 00:20:17.375 "data_size": 63488 00:20:17.375 }, 
00:20:17.375 { 00:20:17.375 "name": "BaseBdev3", 00:20:17.375 "uuid": "9165d580-79a1-4dc2-96b6-a411ece7aa44", 00:20:17.375 "is_configured": true, 00:20:17.375 "data_offset": 2048, 00:20:17.375 "data_size": 63488 00:20:17.375 } 00:20:17.375 ] 00:20:17.375 }' 00:20:17.375 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.375 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.942 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.942 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:17.942 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.942 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.942 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.942 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:17.942 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.942 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:17.942 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.942 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.942 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.942 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 01e8eaad-3235-4160-b920-728c325586f5 00:20:17.942 06:46:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.942 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.202 [2024-12-06 06:46:36.606317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:18.202 [2024-12-06 06:46:36.606639] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:18.202 [2024-12-06 06:46:36.606672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:18.202 NewBaseBdev 00:20:18.202 [2024-12-06 06:46:36.607048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.202 [2024-12-06 06:46:36.612554] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x617000008200 00:20:18.202 [2024-12-06 06:46:36.612582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:18.202 [2024-12-06 06:46:36.612897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.202 [ 00:20:18.202 { 00:20:18.202 "name": "NewBaseBdev", 00:20:18.202 "aliases": [ 00:20:18.202 "01e8eaad-3235-4160-b920-728c325586f5" 00:20:18.202 ], 00:20:18.202 "product_name": "Malloc disk", 00:20:18.202 "block_size": 512, 00:20:18.202 "num_blocks": 65536, 00:20:18.202 "uuid": "01e8eaad-3235-4160-b920-728c325586f5", 00:20:18.202 "assigned_rate_limits": { 00:20:18.202 "rw_ios_per_sec": 0, 00:20:18.202 "rw_mbytes_per_sec": 0, 00:20:18.202 "r_mbytes_per_sec": 0, 00:20:18.202 "w_mbytes_per_sec": 0 00:20:18.202 }, 00:20:18.202 "claimed": true, 00:20:18.202 "claim_type": "exclusive_write", 00:20:18.202 "zoned": false, 00:20:18.202 "supported_io_types": { 00:20:18.202 "read": true, 00:20:18.202 "write": true, 00:20:18.202 "unmap": true, 00:20:18.202 "flush": true, 00:20:18.202 "reset": true, 00:20:18.202 "nvme_admin": false, 00:20:18.202 "nvme_io": false, 00:20:18.202 "nvme_io_md": false, 00:20:18.202 "write_zeroes": true, 00:20:18.202 "zcopy": true, 00:20:18.202 "get_zone_info": false, 00:20:18.202 "zone_management": false, 00:20:18.202 "zone_append": false, 00:20:18.202 "compare": false, 00:20:18.202 "compare_and_write": false, 00:20:18.202 "abort": true, 00:20:18.202 "seek_hole": false, 
00:20:18.202 "seek_data": false, 00:20:18.202 "copy": true, 00:20:18.202 "nvme_iov_md": false 00:20:18.202 }, 00:20:18.202 "memory_domains": [ 00:20:18.202 { 00:20:18.202 "dma_device_id": "system", 00:20:18.202 "dma_device_type": 1 00:20:18.202 }, 00:20:18.202 { 00:20:18.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.202 "dma_device_type": 2 00:20:18.202 } 00:20:18.202 ], 00:20:18.202 "driver_specific": {} 00:20:18.202 } 00:20:18.202 ] 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.202 "name": "Existed_Raid", 00:20:18.202 "uuid": "c050a58a-c7e4-4a3b-9609-8ac37a31d6f8", 00:20:18.202 "strip_size_kb": 64, 00:20:18.202 "state": "online", 00:20:18.202 "raid_level": "raid5f", 00:20:18.202 "superblock": true, 00:20:18.202 "num_base_bdevs": 3, 00:20:18.202 "num_base_bdevs_discovered": 3, 00:20:18.202 "num_base_bdevs_operational": 3, 00:20:18.202 "base_bdevs_list": [ 00:20:18.202 { 00:20:18.202 "name": "NewBaseBdev", 00:20:18.202 "uuid": "01e8eaad-3235-4160-b920-728c325586f5", 00:20:18.202 "is_configured": true, 00:20:18.202 "data_offset": 2048, 00:20:18.202 "data_size": 63488 00:20:18.202 }, 00:20:18.202 { 00:20:18.202 "name": "BaseBdev2", 00:20:18.202 "uuid": "cfcfea05-658f-4438-9f06-619948571b94", 00:20:18.202 "is_configured": true, 00:20:18.202 "data_offset": 2048, 00:20:18.202 "data_size": 63488 00:20:18.202 }, 00:20:18.202 { 00:20:18.202 "name": "BaseBdev3", 00:20:18.202 "uuid": "9165d580-79a1-4dc2-96b6-a411ece7aa44", 00:20:18.202 "is_configured": true, 00:20:18.202 "data_offset": 2048, 00:20:18.202 "data_size": 63488 00:20:18.202 } 00:20:18.202 ] 00:20:18.202 }' 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.202 06:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.770 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:20:18.770 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:18.770 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:18.770 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:18.770 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:18.770 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:18.770 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:18.770 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:18.770 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.770 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.770 [2024-12-06 06:46:37.194958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:18.770 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.770 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:18.770 "name": "Existed_Raid", 00:20:18.770 "aliases": [ 00:20:18.770 "c050a58a-c7e4-4a3b-9609-8ac37a31d6f8" 00:20:18.770 ], 00:20:18.770 "product_name": "Raid Volume", 00:20:18.770 "block_size": 512, 00:20:18.770 "num_blocks": 126976, 00:20:18.770 "uuid": "c050a58a-c7e4-4a3b-9609-8ac37a31d6f8", 00:20:18.770 "assigned_rate_limits": { 00:20:18.770 "rw_ios_per_sec": 0, 00:20:18.770 "rw_mbytes_per_sec": 0, 00:20:18.770 "r_mbytes_per_sec": 0, 00:20:18.770 "w_mbytes_per_sec": 0 00:20:18.770 }, 00:20:18.770 "claimed": false, 00:20:18.770 "zoned": false, 00:20:18.770 
"supported_io_types": { 00:20:18.770 "read": true, 00:20:18.770 "write": true, 00:20:18.770 "unmap": false, 00:20:18.770 "flush": false, 00:20:18.770 "reset": true, 00:20:18.770 "nvme_admin": false, 00:20:18.770 "nvme_io": false, 00:20:18.770 "nvme_io_md": false, 00:20:18.770 "write_zeroes": true, 00:20:18.770 "zcopy": false, 00:20:18.770 "get_zone_info": false, 00:20:18.770 "zone_management": false, 00:20:18.770 "zone_append": false, 00:20:18.770 "compare": false, 00:20:18.770 "compare_and_write": false, 00:20:18.770 "abort": false, 00:20:18.770 "seek_hole": false, 00:20:18.770 "seek_data": false, 00:20:18.770 "copy": false, 00:20:18.770 "nvme_iov_md": false 00:20:18.770 }, 00:20:18.770 "driver_specific": { 00:20:18.770 "raid": { 00:20:18.770 "uuid": "c050a58a-c7e4-4a3b-9609-8ac37a31d6f8", 00:20:18.770 "strip_size_kb": 64, 00:20:18.770 "state": "online", 00:20:18.770 "raid_level": "raid5f", 00:20:18.770 "superblock": true, 00:20:18.770 "num_base_bdevs": 3, 00:20:18.770 "num_base_bdevs_discovered": 3, 00:20:18.770 "num_base_bdevs_operational": 3, 00:20:18.770 "base_bdevs_list": [ 00:20:18.770 { 00:20:18.770 "name": "NewBaseBdev", 00:20:18.770 "uuid": "01e8eaad-3235-4160-b920-728c325586f5", 00:20:18.770 "is_configured": true, 00:20:18.770 "data_offset": 2048, 00:20:18.770 "data_size": 63488 00:20:18.770 }, 00:20:18.770 { 00:20:18.770 "name": "BaseBdev2", 00:20:18.770 "uuid": "cfcfea05-658f-4438-9f06-619948571b94", 00:20:18.770 "is_configured": true, 00:20:18.770 "data_offset": 2048, 00:20:18.770 "data_size": 63488 00:20:18.770 }, 00:20:18.770 { 00:20:18.770 "name": "BaseBdev3", 00:20:18.770 "uuid": "9165d580-79a1-4dc2-96b6-a411ece7aa44", 00:20:18.770 "is_configured": true, 00:20:18.770 "data_offset": 2048, 00:20:18.770 "data_size": 63488 00:20:18.770 } 00:20:18.770 ] 00:20:18.770 } 00:20:18.770 } 00:20:18.770 }' 00:20:18.770 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:20:18.771 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:18.771 BaseBdev2 00:20:18.771 BaseBdev3' 00:20:18.771 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.771 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:18.771 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.771 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:18.771 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.771 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.771 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.771 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.771 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:18.771 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:18.771 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.771 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:18.771 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.771 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.771 06:46:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.771 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.029 [2024-12-06 06:46:37.474729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:19.029 [2024-12-06 06:46:37.474763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:20:19.029 [2024-12-06 06:46:37.474858] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.029 [2024-12-06 06:46:37.475242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:19.029 [2024-12-06 06:46:37.475266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81041 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81041 ']' 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81041 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81041 00:20:19.029 killing process with pid 81041 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81041' 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81041 00:20:19.029 [2024-12-06 06:46:37.505445] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:19.029 06:46:37 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 81041 00:20:19.286 [2024-12-06 06:46:37.776773] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:20.219 ************************************ 00:20:20.219 END TEST raid5f_state_function_test_sb 00:20:20.219 ************************************ 00:20:20.219 06:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:20.219 00:20:20.219 real 0m11.491s 00:20:20.219 user 0m19.052s 00:20:20.219 sys 0m1.554s 00:20:20.219 06:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:20.219 06:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.478 06:46:38 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:20:20.478 06:46:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:20.478 06:46:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:20.478 06:46:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:20.478 ************************************ 00:20:20.478 START TEST raid5f_superblock_test 00:20:20.478 ************************************ 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81665 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81665 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81665 ']' 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:20.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.478 06:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.479 [2024-12-06 06:46:38.964419] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:20:20.479 [2024-12-06 06:46:38.964938] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81665 ] 00:20:20.737 [2024-12-06 06:46:39.146015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.737 [2024-12-06 06:46:39.301099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.995 [2024-12-06 06:46:39.537860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:20.995 [2024-12-06 06:46:39.537940] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:21.564 06:46:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.564 malloc1 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.564 [2024-12-06 06:46:39.994947] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:21.564 [2024-12-06 06:46:39.995027] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.564 [2024-12-06 06:46:39.995074] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:21.564 [2024-12-06 06:46:39.995099] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.564 [2024-12-06 06:46:39.998036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.564 [2024-12-06 06:46:39.998219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:21.564 pt1 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:21.564 06:46:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.564 malloc2 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.564 [2024-12-06 06:46:40.050808] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:21.564 [2024-12-06 06:46:40.051034] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.564 [2024-12-06 06:46:40.051142] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:21.564 [2024-12-06 06:46:40.051301] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.564 [2024-12-06 06:46:40.054113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.564 [2024-12-06 06:46:40.054282] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:21.564 pt2 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.564 malloc3 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.564 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.564 [2024-12-06 06:46:40.120556] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:21.564 [2024-12-06 06:46:40.120777] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.564 [2024-12-06 06:46:40.120966] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:21.564 [2024-12-06 06:46:40.121129] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.564 [2024-12-06 06:46:40.124141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.564 [2024-12-06 06:46:40.124301] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:21.564 pt3 00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.565 [2024-12-06 06:46:40.132749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:21.565 [2024-12-06 
06:46:40.135225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:21.565 [2024-12-06 06:46:40.135481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:21.565 [2024-12-06 06:46:40.135840] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:21.565 [2024-12-06 06:46:40.135882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:21.565 [2024-12-06 06:46:40.136348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:21.565 [2024-12-06 06:46:40.141737] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:21.565 [2024-12-06 06:46:40.141871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:21.565 [2024-12-06 06:46:40.142194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs
00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:21.565 "name": "raid_bdev1",
00:20:21.565 "uuid": "cb952432-856c-42ab-bcb4-30d00202b879",
00:20:21.565 "strip_size_kb": 64,
00:20:21.565 "state": "online",
00:20:21.565 "raid_level": "raid5f",
00:20:21.565 "superblock": true,
00:20:21.565 "num_base_bdevs": 3,
00:20:21.565 "num_base_bdevs_discovered": 3,
00:20:21.565 "num_base_bdevs_operational": 3,
00:20:21.565 "base_bdevs_list": [
00:20:21.565 {
00:20:21.565 "name": "pt1",
00:20:21.565 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:21.565 "is_configured": true,
00:20:21.565 "data_offset": 2048,
00:20:21.565 "data_size": 63488
00:20:21.565 },
00:20:21.565 {
00:20:21.565 "name": "pt2",
00:20:21.565 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:21.565 "is_configured": true,
00:20:21.565 "data_offset": 2048,
00:20:21.565 "data_size": 63488
00:20:21.565 },
00:20:21.565 {
00:20:21.565 "name": "pt3",
00:20:21.565 "uuid": "00000000-0000-0000-0000-000000000003",
00:20:21.565 "is_configured": true,
00:20:21.565 "data_offset": 2048,
00:20:21.565 "data_size": 63488
00:20:21.565 }
00:20:21.565 ]
00:20:21.565 }'
00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:21.565 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:22.132 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:20:22.132 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:20:22.132 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:20:22.132 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:20:22.132 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:20:22.132 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:20:22.132 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:20:22.132 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:20:22.132 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.132 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:22.132 [2024-12-06 06:46:40.649216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:22.132 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.132 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:20:22.132 "name": "raid_bdev1",
00:20:22.132 "aliases": [
00:20:22.132 "cb952432-856c-42ab-bcb4-30d00202b879"
00:20:22.132 ],
00:20:22.132 "product_name": "Raid Volume",
00:20:22.132 "block_size": 512,
00:20:22.132 "num_blocks": 126976,
00:20:22.132 "uuid": "cb952432-856c-42ab-bcb4-30d00202b879",
00:20:22.132 "assigned_rate_limits": {
00:20:22.132 "rw_ios_per_sec": 0,
00:20:22.132 "rw_mbytes_per_sec": 0,
00:20:22.132 "r_mbytes_per_sec": 0,
00:20:22.132 "w_mbytes_per_sec": 0
00:20:22.132 },
00:20:22.132 "claimed": false,
00:20:22.132 "zoned": false,
00:20:22.132 "supported_io_types": {
00:20:22.132 "read": true,
00:20:22.132 "write": true,
00:20:22.132 "unmap": false,
00:20:22.132 "flush": false,
00:20:22.132 "reset": true,
00:20:22.132 "nvme_admin": false,
00:20:22.132 "nvme_io": false,
00:20:22.132 "nvme_io_md": false,
00:20:22.132 "write_zeroes": true,
00:20:22.132 "zcopy": false,
00:20:22.133 "get_zone_info": false,
00:20:22.133 "zone_management": false,
00:20:22.133 "zone_append": false,
00:20:22.133 "compare": false,
00:20:22.133 "compare_and_write": false,
00:20:22.133 "abort": false,
00:20:22.133 "seek_hole": false,
00:20:22.133 "seek_data": false,
00:20:22.133 "copy": false,
00:20:22.133 "nvme_iov_md": false
00:20:22.133 },
00:20:22.133 "driver_specific": {
00:20:22.133 "raid": {
00:20:22.133 "uuid": "cb952432-856c-42ab-bcb4-30d00202b879",
00:20:22.133 "strip_size_kb": 64,
00:20:22.133 "state": "online",
00:20:22.133 "raid_level": "raid5f",
00:20:22.133 "superblock": true,
00:20:22.133 "num_base_bdevs": 3,
00:20:22.133 "num_base_bdevs_discovered": 3,
00:20:22.133 "num_base_bdevs_operational": 3,
00:20:22.133 "base_bdevs_list": [
00:20:22.133 {
00:20:22.133 "name": "pt1",
00:20:22.133 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:22.133 "is_configured": true,
00:20:22.133 "data_offset": 2048,
00:20:22.133 "data_size": 63488
00:20:22.133 },
00:20:22.133 {
00:20:22.133 "name": "pt2",
00:20:22.133 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:22.133 "is_configured": true,
00:20:22.133 "data_offset": 2048,
00:20:22.133 "data_size": 63488
00:20:22.133 },
00:20:22.133 {
00:20:22.133 "name": "pt3",
00:20:22.133 "uuid": "00000000-0000-0000-0000-000000000003",
00:20:22.133 "is_configured": true,
00:20:22.133 "data_offset": 2048,
00:20:22.133 "data_size": 63488
00:20:22.133 }
00:20:22.133 ]
00:20:22.133 }
00:20:22.133 }
00:20:22.133 }'
00:20:22.133 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:20:22.133 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:20:22.133 pt2
00:20:22.133 pt3'
00:20:22.133 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:22.391 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:20:22.391 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:22.391 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:20:22.391 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.391 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:22.391 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:22.391 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.391 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:20:22.391 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:20:22.391 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:20:22.392 [2024-12-06 06:46:40.953271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:22.392 06:46:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.392 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cb952432-856c-42ab-bcb4-30d00202b879
00:20:22.392 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cb952432-856c-42ab-bcb4-30d00202b879 ']'
00:20:22.392 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:20:22.392 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.392 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:22.649 [2024-12-06 06:46:41.037065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:22.649 [2024-12-06 06:46:41.037103] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:22.649 [2024-12-06 06:46:41.037203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:22.649 [2024-12-06 06:46:41.037306] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:22.649 [2024-12-06 06:46:41.037323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.649 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:22.649 [2024-12-06 06:46:41.189156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:20:22.649 [2024-12-06 06:46:41.191659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:20:22.649 [2024-12-06 06:46:41.191736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:20:22.650 [2024-12-06 06:46:41.191824] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:20:22.650 [2024-12-06 06:46:41.191898] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:20:22.650 [2024-12-06 06:46:41.191931] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:20:22.650 [2024-12-06 06:46:41.191958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:22.650 [2024-12-06 06:46:41.191971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:20:22.650 request:
00:20:22.650 {
00:20:22.650 "name": "raid_bdev1",
00:20:22.650 "raid_level": "raid5f",
00:20:22.650 "base_bdevs": [
00:20:22.650 "malloc1",
00:20:22.650 "malloc2",
00:20:22.650 "malloc3"
00:20:22.650 ],
00:20:22.650 "strip_size_kb": 64,
00:20:22.650 "superblock": false,
00:20:22.650 "method": "bdev_raid_create",
00:20:22.650 "req_id": 1
00:20:22.650 }
00:20:22.650 Got JSON-RPC error response
00:20:22.650 response:
00:20:22.650 {
00:20:22.650 "code": -17,
00:20:22.650 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:20:22.650 }
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:22.650 [2024-12-06 06:46:41.253067] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:20:22.650 [2024-12-06 06:46:41.253234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:22.650 [2024-12-06 06:46:41.253363] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:20:22.650 [2024-12-06 06:46:41.253479] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:22.650 [2024-12-06 06:46:41.256298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:22.650 [2024-12-06 06:46:41.256451] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:20:22.650 [2024-12-06 06:46:41.256658] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 [2024-12-06 06:46:41.256834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:20:22.650 pt1
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:22.650 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.939 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:22.939 "name": "raid_bdev1",
00:20:22.939 "uuid": "cb952432-856c-42ab-bcb4-30d00202b879",
00:20:22.939 "strip_size_kb": 64,
00:20:22.939 "state": "configuring",
00:20:22.939 "raid_level": "raid5f",
00:20:22.939 "superblock": true,
00:20:22.939 "num_base_bdevs": 3,
00:20:22.939 "num_base_bdevs_discovered": 1,
00:20:22.939 "num_base_bdevs_operational": 3,
00:20:22.939 "base_bdevs_list": [
00:20:22.939 {
00:20:22.939 "name": "pt1",
00:20:22.939 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:22.939 "is_configured": true,
00:20:22.939 "data_offset": 2048,
00:20:22.939 "data_size": 63488
00:20:22.939 },
00:20:22.939 {
00:20:22.939 "name": null,
00:20:22.939 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:22.939 "is_configured": false,
00:20:22.939 "data_offset": 2048,
00:20:22.939 "data_size": 63488
00:20:22.939 },
00:20:22.939 {
00:20:22.939 "name": null,
00:20:22.939 "uuid": "00000000-0000-0000-0000-000000000003",
00:20:22.939 "is_configured": false,
00:20:22.939 "data_offset": 2048,
00:20:22.939 "data_size": 63488
00:20:22.939 }
00:20:22.939 ]
00:20:22.939 }'
00:20:22.939 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:22.939 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:23.198 [2024-12-06 06:46:41.761323] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 [2024-12-06 06:46:41.761401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:23.198 [2024-12-06 06:46:41.761436] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:20:23.198 [2024-12-06 06:46:41.761451] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:23.198 [2024-12-06 06:46:41.762012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:23.198 [2024-12-06 06:46:41.762063] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:20:23.198 [2024-12-06 06:46:41.762178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:20:23.198 [2024-12-06 06:46:41.762218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:20:23.198 pt2
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:23.198 [2024-12-06 06:46:41.769290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:23.198 "name": "raid_bdev1",
00:20:23.198 "uuid": "cb952432-856c-42ab-bcb4-30d00202b879",
00:20:23.198 "strip_size_kb": 64,
00:20:23.198 "state": "configuring",
00:20:23.198 "raid_level": "raid5f",
00:20:23.198 "superblock": true,
00:20:23.198 "num_base_bdevs": 3,
00:20:23.198 "num_base_bdevs_discovered": 1,
00:20:23.198 "num_base_bdevs_operational": 3,
00:20:23.198 "base_bdevs_list": [
00:20:23.198 {
00:20:23.198 "name": "pt1",
00:20:23.198 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:23.198 "is_configured": true,
00:20:23.198 "data_offset": 2048,
00:20:23.198 "data_size": 63488
00:20:23.198 },
00:20:23.198 {
00:20:23.198 "name": null,
00:20:23.198 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:23.198 "is_configured": false,
00:20:23.198 "data_offset": 0,
00:20:23.198 "data_size": 63488
00:20:23.198 },
00:20:23.198 {
00:20:23.198 "name": null,
00:20:23.198 "uuid": "00000000-0000-0000-0000-000000000003",
00:20:23.198 "is_configured": false,
00:20:23.198 "data_offset": 2048,
00:20:23.198 "data_size": 63488
00:20:23.198 }
00:20:23.198 ]
00:20:23.198 }'
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:23.198 06:46:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:23.764 [2024-12-06 06:46:42.297423] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:20:23.764 [2024-12-06 06:46:42.297537] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:23.764 [2024-12-06 06:46:42.297568] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:20:23.764 [2024-12-06 06:46:42.297586] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:23.764 [2024-12-06 06:46:42.298166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:23.764 [2024-12-06 06:46:42.298204] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:20:23.764 [2024-12-06 06:46:42.298307] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 [2024-12-06 06:46:42.298345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed pt2
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:23.764 [2024-12-06 06:46:42.305390] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:20:23.764 [2024-12-06 06:46:42.305447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:23.764 [2024-12-06 06:46:42.305469] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:20:23.764 [2024-12-06 06:46:42.305484] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:23.764 [2024-12-06 06:46:42.305955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:23.764 [2024-12-06 06:46:42.305994] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:20:23.764 [2024-12-06 06:46:42.306068] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:20:23.764 [2024-12-06 06:46:42.306100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:20:23.764 [2024-12-06 06:46:42.306259] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:20:23.764 [2024-12-06 06:46:42.306289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:20:23.764 [2024-12-06 06:46:42.306613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:20:23.764 [2024-12-06 06:46:42.311467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:20:23.764 [2024-12-06 06:46:42.311492] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:20:23.764 [2024-12-06 06:46:42.311728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:23.764 pt3
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.764 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:23.764 "name": "raid_bdev1",
00:20:23.764 "uuid": "cb952432-856c-42ab-bcb4-30d00202b879",
00:20:23.764 "strip_size_kb": 64,
00:20:23.764 "state": "online",
00:20:23.764 "raid_level": "raid5f",
00:20:23.764 "superblock": true,
00:20:23.764 "num_base_bdevs": 3,
00:20:23.764 "num_base_bdevs_discovered": 3,
00:20:23.764 "num_base_bdevs_operational": 3,
00:20:23.764 "base_bdevs_list": [
00:20:23.764 {
00:20:23.764 "name": "pt1",
00:20:23.764 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:23.764 "is_configured": true,
00:20:23.764 "data_offset": 2048,
00:20:23.765 "data_size": 63488
00:20:23.765 },
00:20:23.765 {
00:20:23.765 "name": "pt2",
00:20:23.765 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:23.765 "is_configured": true,
00:20:23.765 "data_offset": 2048,
00:20:23.765 "data_size": 63488
00:20:23.765 },
00:20:23.765 {
00:20:23.765 "name": "pt3",
00:20:23.765 "uuid": "00000000-0000-0000-0000-000000000003",
00:20:23.765 "is_configured": true,
00:20:23.765 "data_offset": 2048,
00:20:23.765 "data_size": 63488
00:20:23.765 }
00:20:23.765 ]
00:20:23.765 }'
00:20:23.765 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:23.765 06:46:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:24.333 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:20:24.333 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:20:24.333 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:20:24.333 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:20:24.333 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:20:24.333 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:20:24.333 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:20:24.333 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:20:24.333 06:46:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.333 06:46:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:24.333 [2024-12-06 06:46:42.865696] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:24.333 06:46:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.333 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:20:24.333 "name": "raid_bdev1",
00:20:24.333 "aliases": [
00:20:24.333 "cb952432-856c-42ab-bcb4-30d00202b879"
00:20:24.333 ],
00:20:24.333 "product_name": "Raid Volume",
00:20:24.333 "block_size": 512,
00:20:24.333 "num_blocks": 126976,
00:20:24.333 "uuid": "cb952432-856c-42ab-bcb4-30d00202b879",
00:20:24.333 "assigned_rate_limits": {
00:20:24.333 "rw_ios_per_sec": 0,
00:20:24.333 "rw_mbytes_per_sec": 0,
00:20:24.333 "r_mbytes_per_sec": 0,
00:20:24.333 "w_mbytes_per_sec": 0
00:20:24.333 },
00:20:24.333 "claimed": false,
00:20:24.333 "zoned": false,
00:20:24.333 "supported_io_types": {
00:20:24.333 "read": true,
00:20:24.333 "write": true,
00:20:24.333 "unmap": false,
00:20:24.333 "flush": false,
00:20:24.333 "reset": true,
00:20:24.333 "nvme_admin": false,
00:20:24.333 "nvme_io": false,
00:20:24.333 "nvme_io_md": false,
00:20:24.333 "write_zeroes": true,
00:20:24.333 "zcopy": false,
00:20:24.333 "get_zone_info": false,
00:20:24.333 "zone_management": false,
00:20:24.333 "zone_append": false,
00:20:24.333 "compare": false,
00:20:24.333 "compare_and_write": false,
00:20:24.333 "abort": false,
00:20:24.333 "seek_hole": false,
00:20:24.333 "seek_data": false,
00:20:24.333 "copy": false,
00:20:24.333 "nvme_iov_md": false
00:20:24.333 },
00:20:24.333 "driver_specific": {
00:20:24.333 "raid": {
00:20:24.333 "uuid": "cb952432-856c-42ab-bcb4-30d00202b879",
00:20:24.333 "strip_size_kb": 64,
00:20:24.333 "state": "online",
00:20:24.333 "raid_level": "raid5f",
00:20:24.333 "superblock": true,
00:20:24.333 "num_base_bdevs": 3,
00:20:24.333 "num_base_bdevs_discovered": 3,
00:20:24.333 "num_base_bdevs_operational": 3,
00:20:24.333 "base_bdevs_list": [
00:20:24.333 {
00:20:24.333 "name": "pt1",
00:20:24.333 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:24.333 "is_configured": true,
00:20:24.333 "data_offset": 2048,
00:20:24.333 "data_size": 63488
00:20:24.333 },
00:20:24.333 {
00:20:24.333 "name": "pt2",
00:20:24.333 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:24.333 "is_configured": true,
00:20:24.333 "data_offset": 2048,
00:20:24.333 "data_size": 63488
00:20:24.333 },
00:20:24.333 {
00:20:24.333 "name": "pt3",
00:20:24.333 "uuid": "00000000-0000-0000-0000-000000000003",
00:20:24.333 "is_configured": true,
00:20:24.333 "data_offset": 2048,
00:20:24.333 "data_size": 63488
00:20:24.333 }
00:20:24.333 ]
00:20:24.333 }
00:20:24.333 }
00:20:24.333 }'
00:20:24.333 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:20:24.333 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:24.333 pt2 00:20:24.333 pt3' 00:20:24.333 06:46:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.591 06:46:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.591 [2024-12-06 06:46:43.169730] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
cb952432-856c-42ab-bcb4-30d00202b879 '!=' cb952432-856c-42ab-bcb4-30d00202b879 ']' 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.591 [2024-12-06 06:46:43.217568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.591 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.849 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.849 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.849 "name": "raid_bdev1", 00:20:24.849 "uuid": "cb952432-856c-42ab-bcb4-30d00202b879", 00:20:24.849 "strip_size_kb": 64, 00:20:24.849 "state": "online", 00:20:24.849 "raid_level": "raid5f", 00:20:24.849 "superblock": true, 00:20:24.849 "num_base_bdevs": 3, 00:20:24.849 "num_base_bdevs_discovered": 2, 00:20:24.849 "num_base_bdevs_operational": 2, 00:20:24.849 "base_bdevs_list": [ 00:20:24.849 { 00:20:24.849 "name": null, 00:20:24.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.849 "is_configured": false, 00:20:24.849 "data_offset": 0, 00:20:24.849 "data_size": 63488 00:20:24.849 }, 00:20:24.849 { 00:20:24.849 "name": "pt2", 00:20:24.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:24.849 "is_configured": true, 00:20:24.849 "data_offset": 2048, 00:20:24.849 "data_size": 63488 00:20:24.849 }, 00:20:24.849 { 00:20:24.849 "name": "pt3", 00:20:24.849 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:24.849 "is_configured": true, 00:20:24.850 "data_offset": 2048, 00:20:24.850 "data_size": 63488 00:20:24.850 } 00:20:24.850 ] 00:20:24.850 }' 00:20:24.850 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.850 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.109 
06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:25.109 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.109 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.109 [2024-12-06 06:46:43.721638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:25.109 [2024-12-06 06:46:43.721676] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:25.109 [2024-12-06 06:46:43.721770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:25.109 [2024-12-06 06:46:43.721855] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:25.109 [2024-12-06 06:46:43.721881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:25.109 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.109 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:25.109 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.109 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.109 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.109 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.368 [2024-12-06 06:46:43.797604] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:20:25.368 [2024-12-06 06:46:43.797672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.368 [2024-12-06 06:46:43.797697] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:25.368 [2024-12-06 06:46:43.797713] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.368 [2024-12-06 06:46:43.800609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.368 [2024-12-06 06:46:43.800661] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:25.368 [2024-12-06 06:46:43.800756] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:25.368 [2024-12-06 06:46:43.800819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:25.368 pt2 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.368 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.368 "name": "raid_bdev1", 00:20:25.368 "uuid": "cb952432-856c-42ab-bcb4-30d00202b879", 00:20:25.368 "strip_size_kb": 64, 00:20:25.368 "state": "configuring", 00:20:25.368 "raid_level": "raid5f", 00:20:25.368 "superblock": true, 00:20:25.368 "num_base_bdevs": 3, 00:20:25.368 "num_base_bdevs_discovered": 1, 00:20:25.368 "num_base_bdevs_operational": 2, 00:20:25.368 "base_bdevs_list": [ 00:20:25.368 { 00:20:25.368 "name": null, 00:20:25.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.369 "is_configured": false, 00:20:25.369 "data_offset": 2048, 00:20:25.369 "data_size": 63488 00:20:25.369 }, 00:20:25.369 { 00:20:25.369 "name": "pt2", 00:20:25.369 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:25.369 "is_configured": true, 00:20:25.369 "data_offset": 2048, 00:20:25.369 "data_size": 63488 00:20:25.369 }, 00:20:25.369 { 00:20:25.369 "name": null, 00:20:25.369 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:25.369 "is_configured": false, 00:20:25.369 "data_offset": 2048, 00:20:25.369 "data_size": 63488 00:20:25.369 } 00:20:25.369 ] 00:20:25.369 }' 00:20:25.369 06:46:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.369 06:46:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.936 [2024-12-06 06:46:44.313814] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:25.936 [2024-12-06 06:46:44.313910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.936 [2024-12-06 06:46:44.313945] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:25.936 [2024-12-06 06:46:44.313974] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.936 [2024-12-06 06:46:44.314634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.936 [2024-12-06 06:46:44.314665] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:25.936 [2024-12-06 06:46:44.314772] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:25.936 [2024-12-06 06:46:44.314814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:25.936 [2024-12-06 06:46:44.315001] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:25.936 [2024-12-06 06:46:44.315023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:25.936 [2024-12-06 
06:46:44.315339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:25.936 [2024-12-06 06:46:44.320469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:25.936 [2024-12-06 06:46:44.320627] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:25.936 [2024-12-06 06:46:44.321215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.936 pt3 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.936 "name": "raid_bdev1", 00:20:25.936 "uuid": "cb952432-856c-42ab-bcb4-30d00202b879", 00:20:25.936 "strip_size_kb": 64, 00:20:25.936 "state": "online", 00:20:25.936 "raid_level": "raid5f", 00:20:25.936 "superblock": true, 00:20:25.936 "num_base_bdevs": 3, 00:20:25.936 "num_base_bdevs_discovered": 2, 00:20:25.936 "num_base_bdevs_operational": 2, 00:20:25.936 "base_bdevs_list": [ 00:20:25.936 { 00:20:25.936 "name": null, 00:20:25.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.936 "is_configured": false, 00:20:25.936 "data_offset": 2048, 00:20:25.936 "data_size": 63488 00:20:25.936 }, 00:20:25.936 { 00:20:25.936 "name": "pt2", 00:20:25.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:25.936 "is_configured": true, 00:20:25.936 "data_offset": 2048, 00:20:25.936 "data_size": 63488 00:20:25.936 }, 00:20:25.936 { 00:20:25.936 "name": "pt3", 00:20:25.936 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:25.936 "is_configured": true, 00:20:25.936 "data_offset": 2048, 00:20:25.936 "data_size": 63488 00:20:25.936 } 00:20:25.936 ] 00:20:25.936 }' 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.936 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.196 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:26.196 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.196 06:46:44 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:26.196 [2024-12-06 06:46:44.827088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:26.196 [2024-12-06 06:46:44.827127] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:26.196 [2024-12-06 06:46:44.827220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.196 [2024-12-06 06:46:44.827307] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:26.196 [2024-12-06 06:46:44.827323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:26.196 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.196 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.196 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.196 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.196 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:26.454 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.454 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:26.454 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:26.454 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:20:26.454 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:20:26.454 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:20:26.454 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.454 06:46:44 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.455 [2024-12-06 06:46:44.899106] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:26.455 [2024-12-06 06:46:44.899307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.455 [2024-12-06 06:46:44.899346] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:26.455 [2024-12-06 06:46:44.899361] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.455 [2024-12-06 06:46:44.902198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.455 [2024-12-06 06:46:44.902243] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:26.455 [2024-12-06 06:46:44.902344] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:26.455 [2024-12-06 06:46:44.902405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:26.455 [2024-12-06 06:46:44.902731] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:26.455 [2024-12-06 06:46:44.902923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:26.455 [2024-12-06 06:46:44.903132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:26.455 
[2024-12-06 06:46:44.903230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:26.455 pt1 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.455 "name": "raid_bdev1", 00:20:26.455 "uuid": "cb952432-856c-42ab-bcb4-30d00202b879", 00:20:26.455 "strip_size_kb": 64, 00:20:26.455 "state": "configuring", 00:20:26.455 "raid_level": "raid5f", 00:20:26.455 "superblock": true, 00:20:26.455 "num_base_bdevs": 3, 00:20:26.455 "num_base_bdevs_discovered": 1, 00:20:26.455 "num_base_bdevs_operational": 2, 00:20:26.455 "base_bdevs_list": [ 00:20:26.455 { 00:20:26.455 "name": null, 00:20:26.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.455 "is_configured": false, 00:20:26.455 "data_offset": 2048, 00:20:26.455 "data_size": 63488 00:20:26.455 }, 00:20:26.455 { 00:20:26.455 "name": "pt2", 00:20:26.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:26.455 "is_configured": true, 00:20:26.455 "data_offset": 2048, 00:20:26.455 "data_size": 63488 00:20:26.455 }, 00:20:26.455 { 00:20:26.455 "name": null, 00:20:26.455 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:26.455 "is_configured": false, 00:20:26.455 "data_offset": 2048, 00:20:26.455 "data_size": 63488 00:20:26.455 } 00:20:26.455 ] 00:20:26.455 }' 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.455 06:46:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.020 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:27.020 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:20:27.020 06:46:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.020 06:46:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.020 06:46:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:20:27.020 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:20:27.020 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:27.020 06:46:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.020 06:46:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.020 [2024-12-06 06:46:45.487588] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:27.020 [2024-12-06 06:46:45.487664] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.020 [2024-12-06 06:46:45.487697] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:27.020 [2024-12-06 06:46:45.487712] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.020 [2024-12-06 06:46:45.488306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.020 [2024-12-06 06:46:45.488348] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:27.020 [2024-12-06 06:46:45.488469] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:27.020 [2024-12-06 06:46:45.488500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:27.020 [2024-12-06 06:46:45.488673] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:27.020 [2024-12-06 06:46:45.488690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:27.020 [2024-12-06 06:46:45.489003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:27.020 [2024-12-06 06:46:45.494029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:27.020 [2024-12-06 
06:46:45.494181] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:27.020 [2024-12-06 06:46:45.494630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.020 pt3 00:20:27.020 06:46:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.020 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:27.020 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.021 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.021 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:27.021 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.021 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:27.021 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.021 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.021 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.021 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.021 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.021 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.021 06:46:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.021 06:46:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.021 06:46:45 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.021 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.021 "name": "raid_bdev1", 00:20:27.021 "uuid": "cb952432-856c-42ab-bcb4-30d00202b879", 00:20:27.021 "strip_size_kb": 64, 00:20:27.021 "state": "online", 00:20:27.021 "raid_level": "raid5f", 00:20:27.021 "superblock": true, 00:20:27.021 "num_base_bdevs": 3, 00:20:27.021 "num_base_bdevs_discovered": 2, 00:20:27.021 "num_base_bdevs_operational": 2, 00:20:27.021 "base_bdevs_list": [ 00:20:27.021 { 00:20:27.021 "name": null, 00:20:27.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.021 "is_configured": false, 00:20:27.021 "data_offset": 2048, 00:20:27.021 "data_size": 63488 00:20:27.021 }, 00:20:27.021 { 00:20:27.021 "name": "pt2", 00:20:27.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:27.021 "is_configured": true, 00:20:27.021 "data_offset": 2048, 00:20:27.021 "data_size": 63488 00:20:27.021 }, 00:20:27.021 { 00:20:27.021 "name": "pt3", 00:20:27.021 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:27.021 "is_configured": true, 00:20:27.021 "data_offset": 2048, 00:20:27.021 "data_size": 63488 00:20:27.021 } 00:20:27.021 ] 00:20:27.021 }' 00:20:27.021 06:46:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.021 06:46:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.586 06:46:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:27.586 06:46:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.586 06:46:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:27.586 06:46:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.586 06:46:46 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.586 06:46:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:27.586 06:46:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:27.586 06:46:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:27.587 06:46:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.587 06:46:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.587 [2024-12-06 06:46:46.072700] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:27.587 06:46:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.587 06:46:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' cb952432-856c-42ab-bcb4-30d00202b879 '!=' cb952432-856c-42ab-bcb4-30d00202b879 ']' 00:20:27.587 06:46:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81665 00:20:27.587 06:46:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81665 ']' 00:20:27.587 06:46:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81665 00:20:27.587 06:46:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:20:27.587 06:46:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.587 06:46:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81665 00:20:27.587 killing process with pid 81665 00:20:27.587 06:46:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:27.587 06:46:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:27.587 06:46:46 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 81665' 00:20:27.587 06:46:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81665 00:20:27.587 [2024-12-06 06:46:46.150933] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:27.587 06:46:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81665 00:20:27.587 [2024-12-06 06:46:46.151051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:27.587 [2024-12-06 06:46:46.151134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:27.587 [2024-12-06 06:46:46.151155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:27.845 [2024-12-06 06:46:46.420924] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:29.220 06:46:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:29.220 00:20:29.220 real 0m8.594s 00:20:29.220 user 0m14.076s 00:20:29.220 sys 0m1.160s 00:20:29.220 06:46:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.220 ************************************ 00:20:29.220 END TEST raid5f_superblock_test 00:20:29.220 ************************************ 00:20:29.220 06:46:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.220 06:46:47 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:20:29.220 06:46:47 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:20:29.220 06:46:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:29.220 06:46:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:29.220 06:46:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:29.220 ************************************ 00:20:29.220 START TEST 
raid5f_rebuild_test 00:20:29.220 ************************************ 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:29.220 06:46:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82121 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82121 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82121 ']' 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:20:29.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.220 06:46:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.220 [2024-12-06 06:46:47.639020] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:20:29.220 [2024-12-06 06:46:47.639425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82121 ] 00:20:29.220 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:29.220 Zero copy mechanism will not be used. 00:20:29.220 [2024-12-06 06:46:47.823508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.479 [2024-12-06 06:46:47.951939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.738 [2024-12-06 06:46:48.154569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:29.738 [2024-12-06 06:46:48.154865] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:29.998 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.998 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:20:29.998 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:29.998 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:29.998 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.998 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:20:29.998 BaseBdev1_malloc 00:20:29.998 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.998 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:29.998 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.998 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.260 [2024-12-06 06:46:48.647462] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:30.260 [2024-12-06 06:46:48.647557] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.260 [2024-12-06 06:46:48.647593] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:30.260 [2024-12-06 06:46:48.647611] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.260 [2024-12-06 06:46:48.650356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.260 [2024-12-06 06:46:48.650549] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:30.260 BaseBdev1 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.260 BaseBdev2_malloc 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.260 [2024-12-06 06:46:48.695600] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:30.260 [2024-12-06 06:46:48.695679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.260 [2024-12-06 06:46:48.695713] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:30.260 [2024-12-06 06:46:48.695731] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.260 [2024-12-06 06:46:48.698464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.260 [2024-12-06 06:46:48.698662] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:30.260 BaseBdev2 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.260 BaseBdev3_malloc 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.260 
06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.260 [2024-12-06 06:46:48.761721] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:30.260 [2024-12-06 06:46:48.761933] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.260 [2024-12-06 06:46:48.761977] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:30.260 [2024-12-06 06:46:48.761998] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.260 [2024-12-06 06:46:48.765311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.260 [2024-12-06 06:46:48.765363] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:30.260 BaseBdev3 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.260 spare_malloc 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.260 spare_delay 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.260 [2024-12-06 06:46:48.821943] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:30.260 [2024-12-06 06:46:48.822015] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.260 [2024-12-06 06:46:48.822043] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:30.260 [2024-12-06 06:46:48.822060] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.260 [2024-12-06 06:46:48.824867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.260 [2024-12-06 06:46:48.825041] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:30.260 spare 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.260 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.261 [2024-12-06 06:46:48.830024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:30.261 [2024-12-06 06:46:48.832400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:30.261 [2024-12-06 06:46:48.832631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:30.261 [2024-12-06 06:46:48.832768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:30.261 
[2024-12-06 06:46:48.832787] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:30.261 [2024-12-06 06:46:48.833130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:30.261 [2024-12-06 06:46:48.838244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:30.261 [2024-12-06 06:46:48.838276] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:30.261 [2024-12-06 06:46:48.838563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.261 06:46:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.261 "name": "raid_bdev1", 00:20:30.261 "uuid": "ab7ab8e5-4b85-4eb7-bdc9-5196d3b312dd", 00:20:30.261 "strip_size_kb": 64, 00:20:30.261 "state": "online", 00:20:30.261 "raid_level": "raid5f", 00:20:30.261 "superblock": false, 00:20:30.261 "num_base_bdevs": 3, 00:20:30.261 "num_base_bdevs_discovered": 3, 00:20:30.261 "num_base_bdevs_operational": 3, 00:20:30.261 "base_bdevs_list": [ 00:20:30.261 { 00:20:30.261 "name": "BaseBdev1", 00:20:30.261 "uuid": "f2d7cf11-7f22-5620-82c0-2da1582088cf", 00:20:30.261 "is_configured": true, 00:20:30.261 "data_offset": 0, 00:20:30.261 "data_size": 65536 00:20:30.261 }, 00:20:30.261 { 00:20:30.261 "name": "BaseBdev2", 00:20:30.261 "uuid": "263693a7-f2a5-5668-8f2a-9e841ed5b44c", 00:20:30.261 "is_configured": true, 00:20:30.261 "data_offset": 0, 00:20:30.261 "data_size": 65536 00:20:30.261 }, 00:20:30.261 { 00:20:30.261 "name": "BaseBdev3", 00:20:30.261 "uuid": "58d30d34-bbcd-5e4f-add9-84fe53ab4d8f", 00:20:30.261 "is_configured": true, 00:20:30.261 "data_offset": 0, 00:20:30.261 "data_size": 65536 00:20:30.261 } 00:20:30.261 ] 00:20:30.261 }' 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.261 06:46:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.829 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:30.829 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # 
jq -r '.[].num_blocks' 00:20:30.829 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.829 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.829 [2024-12-06 06:46:49.376648] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:30.829 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.829 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:20:30.829 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.829 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.829 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.829 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:30.829 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.088 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:31.088 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:31.088 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:31.088 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:31.088 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:31.088 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:31.088 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:31.088 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:31.088 06:46:49 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:31.088 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:31.088 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:31.088 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:31.088 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:31.088 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:31.349 [2024-12-06 06:46:49.788569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:31.349 /dev/nbd0 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:20:31.349 1+0 records in 00:20:31.349 1+0 records out 00:20:31.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000716412 s, 5.7 MB/s 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:20:31.349 06:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:20:31.935 512+0 records in 00:20:31.935 512+0 records out 00:20:31.935 67108864 bytes (67 MB, 64 MiB) copied, 0.501171 s, 134 MB/s 00:20:31.935 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:31.935 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:31.935 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:31.935 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:31.935 06:46:50 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:31.935 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:31.935 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:32.195 [2024-12-06 06:46:50.677680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.195 [2024-12-06 06:46:50.707427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.195 06:46:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.196 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.196 "name": "raid_bdev1", 00:20:32.196 "uuid": "ab7ab8e5-4b85-4eb7-bdc9-5196d3b312dd", 00:20:32.196 "strip_size_kb": 64, 00:20:32.196 "state": "online", 00:20:32.196 "raid_level": "raid5f", 00:20:32.196 "superblock": false, 00:20:32.196 "num_base_bdevs": 3, 00:20:32.196 "num_base_bdevs_discovered": 2, 00:20:32.196 "num_base_bdevs_operational": 2, 00:20:32.196 "base_bdevs_list": [ 00:20:32.196 { 00:20:32.196 "name": null, 00:20:32.196 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:32.196 "is_configured": false, 00:20:32.196 "data_offset": 0, 00:20:32.196 "data_size": 65536 00:20:32.196 }, 00:20:32.196 { 00:20:32.196 "name": "BaseBdev2", 00:20:32.196 "uuid": "263693a7-f2a5-5668-8f2a-9e841ed5b44c", 00:20:32.196 "is_configured": true, 00:20:32.196 "data_offset": 0, 00:20:32.196 "data_size": 65536 00:20:32.196 }, 00:20:32.196 { 00:20:32.196 "name": "BaseBdev3", 00:20:32.196 "uuid": "58d30d34-bbcd-5e4f-add9-84fe53ab4d8f", 00:20:32.196 "is_configured": true, 00:20:32.196 "data_offset": 0, 00:20:32.196 "data_size": 65536 00:20:32.196 } 00:20:32.196 ] 00:20:32.196 }' 00:20:32.196 06:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.196 06:46:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.762 06:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:32.762 06:46:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.762 06:46:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.762 [2024-12-06 06:46:51.211620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:32.762 [2024-12-06 06:46:51.227224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:20:32.762 06:46:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.762 06:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:32.762 [2024-12-06 06:46:51.234813] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:33.699 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:33.699 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:33.699 
06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:33.699 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:33.699 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:33.699 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.699 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.699 06:46:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.699 06:46:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.699 06:46:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.699 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:33.699 "name": "raid_bdev1", 00:20:33.699 "uuid": "ab7ab8e5-4b85-4eb7-bdc9-5196d3b312dd", 00:20:33.699 "strip_size_kb": 64, 00:20:33.699 "state": "online", 00:20:33.699 "raid_level": "raid5f", 00:20:33.699 "superblock": false, 00:20:33.699 "num_base_bdevs": 3, 00:20:33.699 "num_base_bdevs_discovered": 3, 00:20:33.699 "num_base_bdevs_operational": 3, 00:20:33.699 "process": { 00:20:33.699 "type": "rebuild", 00:20:33.699 "target": "spare", 00:20:33.699 "progress": { 00:20:33.699 "blocks": 18432, 00:20:33.699 "percent": 14 00:20:33.699 } 00:20:33.699 }, 00:20:33.699 "base_bdevs_list": [ 00:20:33.699 { 00:20:33.699 "name": "spare", 00:20:33.699 "uuid": "912a640d-6244-58ce-b983-de89f191032f", 00:20:33.699 "is_configured": true, 00:20:33.699 "data_offset": 0, 00:20:33.699 "data_size": 65536 00:20:33.699 }, 00:20:33.699 { 00:20:33.699 "name": "BaseBdev2", 00:20:33.699 "uuid": "263693a7-f2a5-5668-8f2a-9e841ed5b44c", 00:20:33.699 "is_configured": true, 00:20:33.699 "data_offset": 0, 00:20:33.699 "data_size": 65536 00:20:33.699 }, 00:20:33.699 
{ 00:20:33.699 "name": "BaseBdev3", 00:20:33.699 "uuid": "58d30d34-bbcd-5e4f-add9-84fe53ab4d8f", 00:20:33.699 "is_configured": true, 00:20:33.699 "data_offset": 0, 00:20:33.699 "data_size": 65536 00:20:33.699 } 00:20:33.699 ] 00:20:33.699 }' 00:20:33.699 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:33.699 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:33.699 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.959 [2024-12-06 06:46:52.405169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:33.959 [2024-12-06 06:46:52.450732] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:33.959 [2024-12-06 06:46:52.450838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.959 [2024-12-06 06:46:52.450869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:33.959 [2024-12-06 06:46:52.450900] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.959 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.959 "name": "raid_bdev1", 00:20:33.959 "uuid": "ab7ab8e5-4b85-4eb7-bdc9-5196d3b312dd", 00:20:33.959 "strip_size_kb": 64, 00:20:33.959 "state": "online", 00:20:33.959 "raid_level": "raid5f", 00:20:33.959 "superblock": false, 00:20:33.959 "num_base_bdevs": 3, 00:20:33.959 "num_base_bdevs_discovered": 2, 00:20:33.959 "num_base_bdevs_operational": 2, 00:20:33.959 "base_bdevs_list": [ 00:20:33.959 { 00:20:33.959 "name": null, 00:20:33.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.959 
"is_configured": false, 00:20:33.959 "data_offset": 0, 00:20:33.959 "data_size": 65536 00:20:33.959 }, 00:20:33.959 { 00:20:33.959 "name": "BaseBdev2", 00:20:33.959 "uuid": "263693a7-f2a5-5668-8f2a-9e841ed5b44c", 00:20:33.959 "is_configured": true, 00:20:33.960 "data_offset": 0, 00:20:33.960 "data_size": 65536 00:20:33.960 }, 00:20:33.960 { 00:20:33.960 "name": "BaseBdev3", 00:20:33.960 "uuid": "58d30d34-bbcd-5e4f-add9-84fe53ab4d8f", 00:20:33.960 "is_configured": true, 00:20:33.960 "data_offset": 0, 00:20:33.960 "data_size": 65536 00:20:33.960 } 00:20:33.960 ] 00:20:33.960 }' 00:20:33.960 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.960 06:46:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.527 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:34.527 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:34.527 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:34.527 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:34.527 06:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:34.527 06:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.527 06:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.527 06:46:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.527 06:46:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.527 06:46:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.527 06:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:34.527 "name": 
"raid_bdev1", 00:20:34.527 "uuid": "ab7ab8e5-4b85-4eb7-bdc9-5196d3b312dd", 00:20:34.527 "strip_size_kb": 64, 00:20:34.527 "state": "online", 00:20:34.527 "raid_level": "raid5f", 00:20:34.527 "superblock": false, 00:20:34.527 "num_base_bdevs": 3, 00:20:34.527 "num_base_bdevs_discovered": 2, 00:20:34.527 "num_base_bdevs_operational": 2, 00:20:34.527 "base_bdevs_list": [ 00:20:34.527 { 00:20:34.527 "name": null, 00:20:34.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.527 "is_configured": false, 00:20:34.527 "data_offset": 0, 00:20:34.527 "data_size": 65536 00:20:34.527 }, 00:20:34.527 { 00:20:34.527 "name": "BaseBdev2", 00:20:34.527 "uuid": "263693a7-f2a5-5668-8f2a-9e841ed5b44c", 00:20:34.527 "is_configured": true, 00:20:34.527 "data_offset": 0, 00:20:34.527 "data_size": 65536 00:20:34.527 }, 00:20:34.527 { 00:20:34.527 "name": "BaseBdev3", 00:20:34.527 "uuid": "58d30d34-bbcd-5e4f-add9-84fe53ab4d8f", 00:20:34.527 "is_configured": true, 00:20:34.527 "data_offset": 0, 00:20:34.527 "data_size": 65536 00:20:34.527 } 00:20:34.527 ] 00:20:34.527 }' 00:20:34.527 06:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:34.527 06:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:34.527 06:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.527 06:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:34.527 06:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:34.527 06:46:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.527 06:46:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.528 [2024-12-06 06:46:53.171453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:34.786 [2024-12-06 
06:46:53.185823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:20:34.786 06:46:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.786 06:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:34.786 [2024-12-06 06:46:53.193049] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:35.722 "name": "raid_bdev1", 00:20:35.722 "uuid": "ab7ab8e5-4b85-4eb7-bdc9-5196d3b312dd", 00:20:35.722 "strip_size_kb": 64, 00:20:35.722 "state": "online", 00:20:35.722 "raid_level": "raid5f", 00:20:35.722 "superblock": false, 00:20:35.722 "num_base_bdevs": 3, 00:20:35.722 "num_base_bdevs_discovered": 3, 00:20:35.722 "num_base_bdevs_operational": 3, 
00:20:35.722 "process": { 00:20:35.722 "type": "rebuild", 00:20:35.722 "target": "spare", 00:20:35.722 "progress": { 00:20:35.722 "blocks": 18432, 00:20:35.722 "percent": 14 00:20:35.722 } 00:20:35.722 }, 00:20:35.722 "base_bdevs_list": [ 00:20:35.722 { 00:20:35.722 "name": "spare", 00:20:35.722 "uuid": "912a640d-6244-58ce-b983-de89f191032f", 00:20:35.722 "is_configured": true, 00:20:35.722 "data_offset": 0, 00:20:35.722 "data_size": 65536 00:20:35.722 }, 00:20:35.722 { 00:20:35.722 "name": "BaseBdev2", 00:20:35.722 "uuid": "263693a7-f2a5-5668-8f2a-9e841ed5b44c", 00:20:35.722 "is_configured": true, 00:20:35.722 "data_offset": 0, 00:20:35.722 "data_size": 65536 00:20:35.722 }, 00:20:35.722 { 00:20:35.722 "name": "BaseBdev3", 00:20:35.722 "uuid": "58d30d34-bbcd-5e4f-add9-84fe53ab4d8f", 00:20:35.722 "is_configured": true, 00:20:35.722 "data_offset": 0, 00:20:35.722 "data_size": 65536 00:20:35.722 } 00:20:35.722 ] 00:20:35.722 }' 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=594 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.722 06:46:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.981 06:46:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.981 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:35.981 "name": "raid_bdev1", 00:20:35.981 "uuid": "ab7ab8e5-4b85-4eb7-bdc9-5196d3b312dd", 00:20:35.981 "strip_size_kb": 64, 00:20:35.981 "state": "online", 00:20:35.981 "raid_level": "raid5f", 00:20:35.981 "superblock": false, 00:20:35.981 "num_base_bdevs": 3, 00:20:35.981 "num_base_bdevs_discovered": 3, 00:20:35.981 "num_base_bdevs_operational": 3, 00:20:35.981 "process": { 00:20:35.981 "type": "rebuild", 00:20:35.981 "target": "spare", 00:20:35.981 "progress": { 00:20:35.981 "blocks": 22528, 00:20:35.981 "percent": 17 00:20:35.981 } 00:20:35.981 }, 00:20:35.981 "base_bdevs_list": [ 00:20:35.981 { 00:20:35.981 "name": "spare", 00:20:35.981 "uuid": "912a640d-6244-58ce-b983-de89f191032f", 00:20:35.981 "is_configured": true, 00:20:35.981 "data_offset": 0, 00:20:35.981 "data_size": 65536 00:20:35.981 }, 00:20:35.981 { 00:20:35.981 "name": "BaseBdev2", 
00:20:35.981 "uuid": "263693a7-f2a5-5668-8f2a-9e841ed5b44c", 00:20:35.981 "is_configured": true, 00:20:35.981 "data_offset": 0, 00:20:35.981 "data_size": 65536 00:20:35.981 }, 00:20:35.981 { 00:20:35.981 "name": "BaseBdev3", 00:20:35.981 "uuid": "58d30d34-bbcd-5e4f-add9-84fe53ab4d8f", 00:20:35.981 "is_configured": true, 00:20:35.981 "data_offset": 0, 00:20:35.981 "data_size": 65536 00:20:35.981 } 00:20:35.981 ] 00:20:35.981 }' 00:20:35.981 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:35.981 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:35.981 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:35.981 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.981 06:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:36.917 06:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:36.917 06:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.917 06:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.917 06:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:36.917 06:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:36.917 06:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.917 06:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.917 06:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.917 06:46:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.917 
06:46:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.917 06:46:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.174 06:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:37.174 "name": "raid_bdev1", 00:20:37.174 "uuid": "ab7ab8e5-4b85-4eb7-bdc9-5196d3b312dd", 00:20:37.174 "strip_size_kb": 64, 00:20:37.174 "state": "online", 00:20:37.174 "raid_level": "raid5f", 00:20:37.174 "superblock": false, 00:20:37.174 "num_base_bdevs": 3, 00:20:37.174 "num_base_bdevs_discovered": 3, 00:20:37.174 "num_base_bdevs_operational": 3, 00:20:37.174 "process": { 00:20:37.174 "type": "rebuild", 00:20:37.174 "target": "spare", 00:20:37.174 "progress": { 00:20:37.174 "blocks": 47104, 00:20:37.174 "percent": 35 00:20:37.174 } 00:20:37.174 }, 00:20:37.174 "base_bdevs_list": [ 00:20:37.174 { 00:20:37.174 "name": "spare", 00:20:37.174 "uuid": "912a640d-6244-58ce-b983-de89f191032f", 00:20:37.174 "is_configured": true, 00:20:37.174 "data_offset": 0, 00:20:37.174 "data_size": 65536 00:20:37.174 }, 00:20:37.174 { 00:20:37.174 "name": "BaseBdev2", 00:20:37.174 "uuid": "263693a7-f2a5-5668-8f2a-9e841ed5b44c", 00:20:37.174 "is_configured": true, 00:20:37.174 "data_offset": 0, 00:20:37.174 "data_size": 65536 00:20:37.174 }, 00:20:37.174 { 00:20:37.174 "name": "BaseBdev3", 00:20:37.174 "uuid": "58d30d34-bbcd-5e4f-add9-84fe53ab4d8f", 00:20:37.174 "is_configured": true, 00:20:37.174 "data_offset": 0, 00:20:37.174 "data_size": 65536 00:20:37.174 } 00:20:37.174 ] 00:20:37.174 }' 00:20:37.174 06:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.174 06:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.174 06:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.174 06:46:55 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.174 06:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:38.108 06:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:38.108 06:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:38.108 06:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:38.108 06:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:38.108 06:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:38.108 06:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:38.108 06:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.108 06:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.108 06:46:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.108 06:46:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.108 06:46:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.108 06:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:38.108 "name": "raid_bdev1", 00:20:38.108 "uuid": "ab7ab8e5-4b85-4eb7-bdc9-5196d3b312dd", 00:20:38.108 "strip_size_kb": 64, 00:20:38.108 "state": "online", 00:20:38.108 "raid_level": "raid5f", 00:20:38.108 "superblock": false, 00:20:38.108 "num_base_bdevs": 3, 00:20:38.108 "num_base_bdevs_discovered": 3, 00:20:38.108 "num_base_bdevs_operational": 3, 00:20:38.108 "process": { 00:20:38.108 "type": "rebuild", 00:20:38.108 "target": "spare", 00:20:38.108 "progress": { 00:20:38.108 "blocks": 69632, 00:20:38.108 "percent": 53 00:20:38.108 } 
00:20:38.108 }, 00:20:38.108 "base_bdevs_list": [ 00:20:38.108 { 00:20:38.108 "name": "spare", 00:20:38.108 "uuid": "912a640d-6244-58ce-b983-de89f191032f", 00:20:38.109 "is_configured": true, 00:20:38.109 "data_offset": 0, 00:20:38.109 "data_size": 65536 00:20:38.109 }, 00:20:38.109 { 00:20:38.109 "name": "BaseBdev2", 00:20:38.109 "uuid": "263693a7-f2a5-5668-8f2a-9e841ed5b44c", 00:20:38.109 "is_configured": true, 00:20:38.109 "data_offset": 0, 00:20:38.109 "data_size": 65536 00:20:38.109 }, 00:20:38.109 { 00:20:38.109 "name": "BaseBdev3", 00:20:38.109 "uuid": "58d30d34-bbcd-5e4f-add9-84fe53ab4d8f", 00:20:38.109 "is_configured": true, 00:20:38.109 "data_offset": 0, 00:20:38.109 "data_size": 65536 00:20:38.109 } 00:20:38.109 ] 00:20:38.109 }' 00:20:38.109 06:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:38.367 06:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:38.367 06:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:38.367 06:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:38.367 06:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:39.302 06:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:39.302 06:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.302 06:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.302 06:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:39.302 06:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:39.302 06:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.302 06:46:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.302 06:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.302 06:46:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.302 06:46:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.302 06:46:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.302 06:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.302 "name": "raid_bdev1", 00:20:39.302 "uuid": "ab7ab8e5-4b85-4eb7-bdc9-5196d3b312dd", 00:20:39.302 "strip_size_kb": 64, 00:20:39.302 "state": "online", 00:20:39.302 "raid_level": "raid5f", 00:20:39.302 "superblock": false, 00:20:39.302 "num_base_bdevs": 3, 00:20:39.302 "num_base_bdevs_discovered": 3, 00:20:39.302 "num_base_bdevs_operational": 3, 00:20:39.302 "process": { 00:20:39.302 "type": "rebuild", 00:20:39.302 "target": "spare", 00:20:39.302 "progress": { 00:20:39.302 "blocks": 94208, 00:20:39.302 "percent": 71 00:20:39.302 } 00:20:39.302 }, 00:20:39.302 "base_bdevs_list": [ 00:20:39.302 { 00:20:39.302 "name": "spare", 00:20:39.302 "uuid": "912a640d-6244-58ce-b983-de89f191032f", 00:20:39.302 "is_configured": true, 00:20:39.302 "data_offset": 0, 00:20:39.302 "data_size": 65536 00:20:39.302 }, 00:20:39.302 { 00:20:39.302 "name": "BaseBdev2", 00:20:39.302 "uuid": "263693a7-f2a5-5668-8f2a-9e841ed5b44c", 00:20:39.302 "is_configured": true, 00:20:39.302 "data_offset": 0, 00:20:39.302 "data_size": 65536 00:20:39.302 }, 00:20:39.302 { 00:20:39.302 "name": "BaseBdev3", 00:20:39.302 "uuid": "58d30d34-bbcd-5e4f-add9-84fe53ab4d8f", 00:20:39.302 "is_configured": true, 00:20:39.302 "data_offset": 0, 00:20:39.302 "data_size": 65536 00:20:39.302 } 00:20:39.302 ] 00:20:39.302 }' 00:20:39.302 06:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:20:39.561 06:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:39.561 06:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.561 06:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:39.561 06:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:40.498 06:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:40.498 06:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:40.498 06:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:40.498 06:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:40.498 06:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:40.498 06:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:40.498 06:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.498 06:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.498 06:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.498 06:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.498 06:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.498 06:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:40.498 "name": "raid_bdev1", 00:20:40.498 "uuid": "ab7ab8e5-4b85-4eb7-bdc9-5196d3b312dd", 00:20:40.498 "strip_size_kb": 64, 00:20:40.498 "state": "online", 00:20:40.498 "raid_level": "raid5f", 00:20:40.498 "superblock": 
false, 00:20:40.498 "num_base_bdevs": 3, 00:20:40.498 "num_base_bdevs_discovered": 3, 00:20:40.498 "num_base_bdevs_operational": 3, 00:20:40.498 "process": { 00:20:40.498 "type": "rebuild", 00:20:40.498 "target": "spare", 00:20:40.498 "progress": { 00:20:40.498 "blocks": 116736, 00:20:40.498 "percent": 89 00:20:40.498 } 00:20:40.498 }, 00:20:40.498 "base_bdevs_list": [ 00:20:40.498 { 00:20:40.498 "name": "spare", 00:20:40.498 "uuid": "912a640d-6244-58ce-b983-de89f191032f", 00:20:40.498 "is_configured": true, 00:20:40.498 "data_offset": 0, 00:20:40.498 "data_size": 65536 00:20:40.498 }, 00:20:40.498 { 00:20:40.498 "name": "BaseBdev2", 00:20:40.498 "uuid": "263693a7-f2a5-5668-8f2a-9e841ed5b44c", 00:20:40.498 "is_configured": true, 00:20:40.498 "data_offset": 0, 00:20:40.498 "data_size": 65536 00:20:40.498 }, 00:20:40.498 { 00:20:40.498 "name": "BaseBdev3", 00:20:40.498 "uuid": "58d30d34-bbcd-5e4f-add9-84fe53ab4d8f", 00:20:40.498 "is_configured": true, 00:20:40.498 "data_offset": 0, 00:20:40.498 "data_size": 65536 00:20:40.498 } 00:20:40.498 ] 00:20:40.498 }' 00:20:40.498 06:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:40.499 06:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:40.499 06:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:40.757 06:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.757 06:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:41.325 [2024-12-06 06:46:59.674100] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:41.325 [2024-12-06 06:46:59.674234] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:41.325 [2024-12-06 06:46:59.674301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:20:41.584 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:41.584 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:41.584 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.584 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:41.584 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:41.584 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.584 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.584 06:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.584 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.584 06:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.584 06:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.842 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.842 "name": "raid_bdev1", 00:20:41.842 "uuid": "ab7ab8e5-4b85-4eb7-bdc9-5196d3b312dd", 00:20:41.842 "strip_size_kb": 64, 00:20:41.842 "state": "online", 00:20:41.842 "raid_level": "raid5f", 00:20:41.842 "superblock": false, 00:20:41.842 "num_base_bdevs": 3, 00:20:41.842 "num_base_bdevs_discovered": 3, 00:20:41.842 "num_base_bdevs_operational": 3, 00:20:41.842 "base_bdevs_list": [ 00:20:41.842 { 00:20:41.842 "name": "spare", 00:20:41.842 "uuid": "912a640d-6244-58ce-b983-de89f191032f", 00:20:41.842 "is_configured": true, 00:20:41.842 "data_offset": 0, 00:20:41.842 "data_size": 65536 00:20:41.842 }, 00:20:41.842 { 00:20:41.842 "name": "BaseBdev2", 00:20:41.842 "uuid": 
"263693a7-f2a5-5668-8f2a-9e841ed5b44c", 00:20:41.842 "is_configured": true, 00:20:41.842 "data_offset": 0, 00:20:41.842 "data_size": 65536 00:20:41.842 }, 00:20:41.842 { 00:20:41.842 "name": "BaseBdev3", 00:20:41.842 "uuid": "58d30d34-bbcd-5e4f-add9-84fe53ab4d8f", 00:20:41.842 "is_configured": true, 00:20:41.843 "data_offset": 0, 00:20:41.843 "data_size": 65536 00:20:41.843 } 00:20:41.843 ] 00:20:41.843 }' 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.843 "name": "raid_bdev1", 00:20:41.843 "uuid": "ab7ab8e5-4b85-4eb7-bdc9-5196d3b312dd", 00:20:41.843 "strip_size_kb": 64, 00:20:41.843 "state": "online", 00:20:41.843 "raid_level": "raid5f", 00:20:41.843 "superblock": false, 00:20:41.843 "num_base_bdevs": 3, 00:20:41.843 "num_base_bdevs_discovered": 3, 00:20:41.843 "num_base_bdevs_operational": 3, 00:20:41.843 "base_bdevs_list": [ 00:20:41.843 { 00:20:41.843 "name": "spare", 00:20:41.843 "uuid": "912a640d-6244-58ce-b983-de89f191032f", 00:20:41.843 "is_configured": true, 00:20:41.843 "data_offset": 0, 00:20:41.843 "data_size": 65536 00:20:41.843 }, 00:20:41.843 { 00:20:41.843 "name": "BaseBdev2", 00:20:41.843 "uuid": "263693a7-f2a5-5668-8f2a-9e841ed5b44c", 00:20:41.843 "is_configured": true, 00:20:41.843 "data_offset": 0, 00:20:41.843 "data_size": 65536 00:20:41.843 }, 00:20:41.843 { 00:20:41.843 "name": "BaseBdev3", 00:20:41.843 "uuid": "58d30d34-bbcd-5e4f-add9-84fe53ab4d8f", 00:20:41.843 "is_configured": true, 00:20:41.843 "data_offset": 0, 00:20:41.843 "data_size": 65536 00:20:41.843 } 00:20:41.843 ] 00:20:41.843 }' 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:41.843 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:42.102 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:42.102 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:42.102 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.102 06:47:00 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.102 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:42.102 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:42.102 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:42.102 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.102 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.102 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.102 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.102 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.102 06:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.102 06:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.102 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.102 06:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.102 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.102 "name": "raid_bdev1", 00:20:42.102 "uuid": "ab7ab8e5-4b85-4eb7-bdc9-5196d3b312dd", 00:20:42.102 "strip_size_kb": 64, 00:20:42.102 "state": "online", 00:20:42.102 "raid_level": "raid5f", 00:20:42.102 "superblock": false, 00:20:42.102 "num_base_bdevs": 3, 00:20:42.102 "num_base_bdevs_discovered": 3, 00:20:42.102 "num_base_bdevs_operational": 3, 00:20:42.102 "base_bdevs_list": [ 00:20:42.102 { 00:20:42.102 "name": "spare", 00:20:42.102 "uuid": "912a640d-6244-58ce-b983-de89f191032f", 00:20:42.102 "is_configured": true, 00:20:42.102 "data_offset": 
0, 00:20:42.102 "data_size": 65536 00:20:42.102 }, 00:20:42.102 { 00:20:42.102 "name": "BaseBdev2", 00:20:42.102 "uuid": "263693a7-f2a5-5668-8f2a-9e841ed5b44c", 00:20:42.102 "is_configured": true, 00:20:42.102 "data_offset": 0, 00:20:42.102 "data_size": 65536 00:20:42.103 }, 00:20:42.103 { 00:20:42.103 "name": "BaseBdev3", 00:20:42.103 "uuid": "58d30d34-bbcd-5e4f-add9-84fe53ab4d8f", 00:20:42.103 "is_configured": true, 00:20:42.103 "data_offset": 0, 00:20:42.103 "data_size": 65536 00:20:42.103 } 00:20:42.103 ] 00:20:42.103 }' 00:20:42.103 06:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.103 06:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.670 [2024-12-06 06:47:01.041952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:42.670 [2024-12-06 06:47:01.042116] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:42.670 [2024-12-06 06:47:01.042332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:42.670 [2024-12-06 06:47:01.042568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:42.670 [2024-12-06 06:47:01.042737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:42.670 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:42.930 /dev/nbd0 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:42.930 1+0 records in 00:20:42.930 1+0 records out 00:20:42.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513701 s, 8.0 MB/s 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:42.930 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:42.930 
06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:43.189 /dev/nbd1 00:20:43.189 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:43.189 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:43.189 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:43.189 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:43.189 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:43.189 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:43.189 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:43.189 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:43.189 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:43.189 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:43.189 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:43.189 1+0 records in 00:20:43.189 1+0 records out 00:20:43.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420861 s, 9.7 MB/s 00:20:43.189 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.447 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:43.447 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.447 06:47:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:43.447 06:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:43.447 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:43.447 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:43.447 06:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:43.447 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:43.447 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:43.447 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:43.447 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:43.447 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:43.447 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:43.447 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:44.014 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:44.014 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:44.014 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:44.014 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:44.014 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:44.014 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:44.014 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:20:44.014 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:44.014 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:44.014 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82121 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82121 ']' 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82121 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82121 00:20:44.273 killing process with pid 82121 00:20:44.273 Received shutdown signal, test time 
was about 60.000000 seconds 00:20:44.273 00:20:44.273 Latency(us) 00:20:44.273 [2024-12-06T06:47:02.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.273 [2024-12-06T06:47:02.920Z] =================================================================================================================== 00:20:44.273 [2024-12-06T06:47:02.920Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82121' 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82121 00:20:44.273 [2024-12-06 06:47:02.777621] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:44.273 06:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82121 00:20:44.540 [2024-12-06 06:47:03.131132] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:45.915 06:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:20:45.915 00:20:45.915 real 0m16.667s 00:20:45.915 user 0m21.415s 00:20:45.915 sys 0m2.097s 00:20:45.915 ************************************ 00:20:45.915 END TEST raid5f_rebuild_test 00:20:45.915 ************************************ 00:20:45.915 06:47:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:45.915 06:47:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.915 06:47:04 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:20:45.915 06:47:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:45.915 06:47:04 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:45.915 06:47:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:45.915 ************************************ 00:20:45.915 START TEST raid5f_rebuild_test_sb 00:20:45.915 ************************************ 00:20:45.915 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:20:45.915 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:45.915 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:20:45.915 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:45.915 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:45.915 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:45.915 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:45.915 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:45.915 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:45.915 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:45.915 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:45.915 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:45.915 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:45.916 06:47:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82571 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82571 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82571 
']' 00:20:45.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.916 06:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.916 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:45.916 Zero copy mechanism will not be used. 00:20:45.916 [2024-12-06 06:47:04.366933] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:20:45.916 [2024-12-06 06:47:04.367118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82571 ] 00:20:45.916 [2024-12-06 06:47:04.548978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.173 [2024-12-06 06:47:04.680811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.430 [2024-12-06 06:47:04.907281] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:46.430 [2024-12-06 06:47:04.907361] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:46.997 06:47:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.997 BaseBdev1_malloc 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.997 [2024-12-06 06:47:05.425363] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:46.997 [2024-12-06 06:47:05.425441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.997 [2024-12-06 06:47:05.425472] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:46.997 [2024-12-06 06:47:05.425491] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.997 [2024-12-06 06:47:05.428464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.997 [2024-12-06 06:47:05.428540] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:46.997 BaseBdev1 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.997 BaseBdev2_malloc 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.997 [2024-12-06 06:47:05.480423] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:46.997 [2024-12-06 06:47:05.480539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.997 [2024-12-06 06:47:05.480579] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:46.997 [2024-12-06 06:47:05.480597] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.997 [2024-12-06 06:47:05.483516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.997 [2024-12-06 06:47:05.483589] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:46.997 BaseBdev2 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.997 
06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.997 BaseBdev3_malloc 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.997 [2024-12-06 06:47:05.543504] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:46.997 [2024-12-06 06:47:05.543590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.997 [2024-12-06 06:47:05.543633] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:46.997 [2024-12-06 06:47:05.543650] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.997 [2024-12-06 06:47:05.546487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.997 [2024-12-06 06:47:05.546556] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:46.997 BaseBdev3 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.997 spare_malloc 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.997 spare_delay 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.997 [2024-12-06 06:47:05.602498] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:46.997 [2024-12-06 06:47:05.602595] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.997 [2024-12-06 06:47:05.602637] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:46.997 [2024-12-06 06:47:05.602663] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.997 [2024-12-06 06:47:05.605689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.997 [2024-12-06 06:47:05.605870] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:46.997 spare 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.997 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.997 [2024-12-06 06:47:05.610669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:46.997 [2024-12-06 06:47:05.613221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:46.998 [2024-12-06 06:47:05.613485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:46.998 [2024-12-06 06:47:05.613820] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:46.998 [2024-12-06 06:47:05.613841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:46.998 [2024-12-06 06:47:05.614159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:46.998 [2024-12-06 06:47:05.619877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:46.998 [2024-12-06 06:47:05.620047] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:46.998 [2024-12-06 06:47:05.620300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.998 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.998 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:46.998 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:46.998 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:46.998 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:46.998 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:46.998 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:20:46.998 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.998 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.998 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.998 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.998 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.998 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.998 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.998 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.256 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.256 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.256 "name": "raid_bdev1", 00:20:47.256 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:20:47.256 "strip_size_kb": 64, 00:20:47.256 "state": "online", 00:20:47.256 "raid_level": "raid5f", 00:20:47.256 "superblock": true, 00:20:47.256 "num_base_bdevs": 3, 00:20:47.256 "num_base_bdevs_discovered": 3, 00:20:47.256 "num_base_bdevs_operational": 3, 00:20:47.256 "base_bdevs_list": [ 00:20:47.256 { 00:20:47.256 "name": "BaseBdev1", 00:20:47.256 "uuid": "4e19510d-6602-53ca-b9a5-b344cfc0e5c3", 00:20:47.256 "is_configured": true, 00:20:47.256 "data_offset": 2048, 00:20:47.256 "data_size": 63488 00:20:47.256 }, 00:20:47.256 { 00:20:47.256 "name": "BaseBdev2", 00:20:47.256 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:20:47.256 "is_configured": true, 00:20:47.256 "data_offset": 2048, 00:20:47.256 "data_size": 63488 00:20:47.256 }, 00:20:47.256 { 00:20:47.256 "name": 
"BaseBdev3", 00:20:47.256 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:20:47.256 "is_configured": true, 00:20:47.256 "data_offset": 2048, 00:20:47.256 "data_size": 63488 00:20:47.256 } 00:20:47.256 ] 00:20:47.256 }' 00:20:47.256 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.256 06:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.822 [2024-12-06 06:47:06.178689] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:47.822 06:47:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:47.822 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:48.150 [2024-12-06 06:47:06.594621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:48.150 /dev/nbd0 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i = 1 )) 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:48.150 1+0 records in 00:20:48.150 1+0 records out 00:20:48.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620334 s, 6.6 MB/s 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 
00:20:48.150 06:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:20:48.717 496+0 records in 00:20:48.717 496+0 records out 00:20:48.717 65011712 bytes (65 MB, 62 MiB) copied, 0.533989 s, 122 MB/s 00:20:48.717 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:48.717 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:48.717 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:48.717 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:48.717 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:48.717 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:48.717 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:48.976 [2024-12-06 06:47:07.524098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:48.976 06:47:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.976 [2024-12-06 06:47:07.538182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.976 "name": "raid_bdev1", 00:20:48.976 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:20:48.976 "strip_size_kb": 64, 00:20:48.976 "state": "online", 00:20:48.976 "raid_level": "raid5f", 00:20:48.976 "superblock": true, 00:20:48.976 "num_base_bdevs": 3, 00:20:48.976 "num_base_bdevs_discovered": 2, 00:20:48.976 "num_base_bdevs_operational": 2, 00:20:48.976 "base_bdevs_list": [ 00:20:48.976 { 00:20:48.976 "name": null, 00:20:48.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.976 "is_configured": false, 00:20:48.976 "data_offset": 0, 00:20:48.976 "data_size": 63488 00:20:48.976 }, 00:20:48.976 { 00:20:48.976 "name": "BaseBdev2", 00:20:48.976 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:20:48.976 "is_configured": true, 00:20:48.976 "data_offset": 2048, 00:20:48.976 "data_size": 63488 00:20:48.976 }, 00:20:48.976 { 00:20:48.976 "name": "BaseBdev3", 00:20:48.976 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:20:48.976 "is_configured": true, 00:20:48.976 "data_offset": 2048, 00:20:48.976 "data_size": 63488 00:20:48.976 } 00:20:48.976 ] 00:20:48.976 }' 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.976 06:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.543 06:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:49.543 06:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.543 06:47:08 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.543 [2024-12-06 06:47:08.054346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:49.543 [2024-12-06 06:47:08.069649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:20:49.543 06:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.543 06:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:49.543 [2024-12-06 06:47:08.076985] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:50.479 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.479 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:50.479 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:50.479 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:50.479 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:50.479 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.479 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.480 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.480 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.480 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.738 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:50.738 "name": "raid_bdev1", 00:20:50.738 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 
00:20:50.738 "strip_size_kb": 64, 00:20:50.738 "state": "online", 00:20:50.738 "raid_level": "raid5f", 00:20:50.738 "superblock": true, 00:20:50.738 "num_base_bdevs": 3, 00:20:50.738 "num_base_bdevs_discovered": 3, 00:20:50.739 "num_base_bdevs_operational": 3, 00:20:50.739 "process": { 00:20:50.739 "type": "rebuild", 00:20:50.739 "target": "spare", 00:20:50.739 "progress": { 00:20:50.739 "blocks": 18432, 00:20:50.739 "percent": 14 00:20:50.739 } 00:20:50.739 }, 00:20:50.739 "base_bdevs_list": [ 00:20:50.739 { 00:20:50.739 "name": "spare", 00:20:50.739 "uuid": "1bfad234-7027-59fc-9c75-803f8ed27771", 00:20:50.739 "is_configured": true, 00:20:50.739 "data_offset": 2048, 00:20:50.739 "data_size": 63488 00:20:50.739 }, 00:20:50.739 { 00:20:50.739 "name": "BaseBdev2", 00:20:50.739 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:20:50.739 "is_configured": true, 00:20:50.739 "data_offset": 2048, 00:20:50.739 "data_size": 63488 00:20:50.739 }, 00:20:50.739 { 00:20:50.739 "name": "BaseBdev3", 00:20:50.739 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:20:50.739 "is_configured": true, 00:20:50.739 "data_offset": 2048, 00:20:50.739 "data_size": 63488 00:20:50.739 } 00:20:50.739 ] 00:20:50.739 }' 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:20:50.739 [2024-12-06 06:47:09.251645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:50.739 [2024-12-06 06:47:09.292000] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:50.739 [2024-12-06 06:47:09.292362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.739 [2024-12-06 06:47:09.292545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:50.739 [2024-12-06 06:47:09.292604] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.739 
06:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.739 "name": "raid_bdev1", 00:20:50.739 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:20:50.739 "strip_size_kb": 64, 00:20:50.739 "state": "online", 00:20:50.739 "raid_level": "raid5f", 00:20:50.739 "superblock": true, 00:20:50.739 "num_base_bdevs": 3, 00:20:50.739 "num_base_bdevs_discovered": 2, 00:20:50.739 "num_base_bdevs_operational": 2, 00:20:50.739 "base_bdevs_list": [ 00:20:50.739 { 00:20:50.739 "name": null, 00:20:50.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.739 "is_configured": false, 00:20:50.739 "data_offset": 0, 00:20:50.739 "data_size": 63488 00:20:50.739 }, 00:20:50.739 { 00:20:50.739 "name": "BaseBdev2", 00:20:50.739 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:20:50.739 "is_configured": true, 00:20:50.739 "data_offset": 2048, 00:20:50.739 "data_size": 63488 00:20:50.739 }, 00:20:50.739 { 00:20:50.739 "name": "BaseBdev3", 00:20:50.739 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:20:50.739 "is_configured": true, 00:20:50.739 "data_offset": 2048, 00:20:50.739 "data_size": 63488 00:20:50.739 } 00:20:50.739 ] 00:20:50.739 }' 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.739 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.306 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.306 06:47:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.306 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:51.306 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:51.306 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.306 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.306 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.306 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.306 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.306 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.306 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.306 "name": "raid_bdev1", 00:20:51.306 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:20:51.306 "strip_size_kb": 64, 00:20:51.306 "state": "online", 00:20:51.306 "raid_level": "raid5f", 00:20:51.306 "superblock": true, 00:20:51.306 "num_base_bdevs": 3, 00:20:51.306 "num_base_bdevs_discovered": 2, 00:20:51.306 "num_base_bdevs_operational": 2, 00:20:51.306 "base_bdevs_list": [ 00:20:51.306 { 00:20:51.306 "name": null, 00:20:51.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.306 "is_configured": false, 00:20:51.306 "data_offset": 0, 00:20:51.306 "data_size": 63488 00:20:51.306 }, 00:20:51.306 { 00:20:51.306 "name": "BaseBdev2", 00:20:51.306 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:20:51.306 "is_configured": true, 00:20:51.306 "data_offset": 2048, 00:20:51.306 "data_size": 63488 00:20:51.306 }, 00:20:51.306 { 00:20:51.306 "name": "BaseBdev3", 00:20:51.306 "uuid": 
"5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:20:51.306 "is_configured": true, 00:20:51.306 "data_offset": 2048, 00:20:51.306 "data_size": 63488 00:20:51.306 } 00:20:51.306 ] 00:20:51.306 }' 00:20:51.306 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.565 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:51.565 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:51.565 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:51.565 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:51.565 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.565 06:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.565 [2024-12-06 06:47:10.004268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:51.565 [2024-12-06 06:47:10.019272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:20:51.565 06:47:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.565 06:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:51.565 [2024-12-06 06:47:10.026678] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:52.567 "name": "raid_bdev1", 00:20:52.567 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:20:52.567 "strip_size_kb": 64, 00:20:52.567 "state": "online", 00:20:52.567 "raid_level": "raid5f", 00:20:52.567 "superblock": true, 00:20:52.567 "num_base_bdevs": 3, 00:20:52.567 "num_base_bdevs_discovered": 3, 00:20:52.567 "num_base_bdevs_operational": 3, 00:20:52.567 "process": { 00:20:52.567 "type": "rebuild", 00:20:52.567 "target": "spare", 00:20:52.567 "progress": { 00:20:52.567 "blocks": 18432, 00:20:52.567 "percent": 14 00:20:52.567 } 00:20:52.567 }, 00:20:52.567 "base_bdevs_list": [ 00:20:52.567 { 00:20:52.567 "name": "spare", 00:20:52.567 "uuid": "1bfad234-7027-59fc-9c75-803f8ed27771", 00:20:52.567 "is_configured": true, 00:20:52.567 "data_offset": 2048, 00:20:52.567 "data_size": 63488 00:20:52.567 }, 00:20:52.567 { 00:20:52.567 "name": "BaseBdev2", 00:20:52.567 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:20:52.567 "is_configured": true, 00:20:52.567 "data_offset": 2048, 00:20:52.567 "data_size": 63488 00:20:52.567 }, 00:20:52.567 { 00:20:52.567 "name": "BaseBdev3", 00:20:52.567 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:20:52.567 
"is_configured": true, 00:20:52.567 "data_offset": 2048, 00:20:52.567 "data_size": 63488 00:20:52.567 } 00:20:52.567 ] 00:20:52.567 }' 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:52.567 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=611 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.567 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:52.567 "name": "raid_bdev1", 00:20:52.567 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:20:52.567 "strip_size_kb": 64, 00:20:52.567 "state": "online", 00:20:52.567 "raid_level": "raid5f", 00:20:52.567 "superblock": true, 00:20:52.567 "num_base_bdevs": 3, 00:20:52.567 "num_base_bdevs_discovered": 3, 00:20:52.567 "num_base_bdevs_operational": 3, 00:20:52.567 "process": { 00:20:52.567 "type": "rebuild", 00:20:52.567 "target": "spare", 00:20:52.567 "progress": { 00:20:52.567 "blocks": 22528, 00:20:52.567 "percent": 17 00:20:52.567 } 00:20:52.567 }, 00:20:52.567 "base_bdevs_list": [ 00:20:52.567 { 00:20:52.567 "name": "spare", 00:20:52.567 "uuid": "1bfad234-7027-59fc-9c75-803f8ed27771", 00:20:52.567 "is_configured": true, 00:20:52.567 "data_offset": 2048, 00:20:52.567 "data_size": 63488 00:20:52.567 }, 00:20:52.567 { 00:20:52.567 "name": "BaseBdev2", 00:20:52.567 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:20:52.567 "is_configured": true, 00:20:52.567 "data_offset": 2048, 00:20:52.567 "data_size": 63488 00:20:52.567 }, 00:20:52.567 { 00:20:52.567 "name": "BaseBdev3", 00:20:52.568 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:20:52.568 "is_configured": true, 00:20:52.568 "data_offset": 2048, 00:20:52.568 "data_size": 63488 00:20:52.568 } 00:20:52.568 ] 00:20:52.568 }' 00:20:52.826 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:20:52.826 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:52.826 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:52.826 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:52.826 06:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:53.762 06:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:53.762 06:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.762 06:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.762 06:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:53.762 06:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:53.762 06:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:53.762 06:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.762 06:47:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.762 06:47:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.762 06:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.762 06:47:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.762 06:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.762 "name": "raid_bdev1", 00:20:53.762 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:20:53.762 "strip_size_kb": 64, 00:20:53.762 "state": "online", 00:20:53.762 
"raid_level": "raid5f", 00:20:53.762 "superblock": true, 00:20:53.762 "num_base_bdevs": 3, 00:20:53.762 "num_base_bdevs_discovered": 3, 00:20:53.762 "num_base_bdevs_operational": 3, 00:20:53.762 "process": { 00:20:53.762 "type": "rebuild", 00:20:53.762 "target": "spare", 00:20:53.762 "progress": { 00:20:53.762 "blocks": 45056, 00:20:53.762 "percent": 35 00:20:53.762 } 00:20:53.762 }, 00:20:53.762 "base_bdevs_list": [ 00:20:53.762 { 00:20:53.762 "name": "spare", 00:20:53.762 "uuid": "1bfad234-7027-59fc-9c75-803f8ed27771", 00:20:53.762 "is_configured": true, 00:20:53.762 "data_offset": 2048, 00:20:53.762 "data_size": 63488 00:20:53.762 }, 00:20:53.762 { 00:20:53.762 "name": "BaseBdev2", 00:20:53.762 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:20:53.762 "is_configured": true, 00:20:53.762 "data_offset": 2048, 00:20:53.762 "data_size": 63488 00:20:53.762 }, 00:20:53.762 { 00:20:53.762 "name": "BaseBdev3", 00:20:53.762 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:20:53.762 "is_configured": true, 00:20:53.762 "data_offset": 2048, 00:20:53.762 "data_size": 63488 00:20:53.762 } 00:20:53.762 ] 00:20:53.762 }' 00:20:53.762 06:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:54.021 06:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.021 06:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:54.021 06:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:54.021 06:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:54.956 06:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:54.956 06:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:54.956 06:47:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:54.956 06:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:54.956 06:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:54.956 06:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:54.956 06:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.956 06:47:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.956 06:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.956 06:47:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.956 06:47:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.956 06:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:54.956 "name": "raid_bdev1", 00:20:54.956 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:20:54.956 "strip_size_kb": 64, 00:20:54.956 "state": "online", 00:20:54.956 "raid_level": "raid5f", 00:20:54.956 "superblock": true, 00:20:54.956 "num_base_bdevs": 3, 00:20:54.956 "num_base_bdevs_discovered": 3, 00:20:54.956 "num_base_bdevs_operational": 3, 00:20:54.956 "process": { 00:20:54.956 "type": "rebuild", 00:20:54.956 "target": "spare", 00:20:54.956 "progress": { 00:20:54.956 "blocks": 69632, 00:20:54.956 "percent": 54 00:20:54.956 } 00:20:54.956 }, 00:20:54.956 "base_bdevs_list": [ 00:20:54.956 { 00:20:54.956 "name": "spare", 00:20:54.956 "uuid": "1bfad234-7027-59fc-9c75-803f8ed27771", 00:20:54.956 "is_configured": true, 00:20:54.956 "data_offset": 2048, 00:20:54.956 "data_size": 63488 00:20:54.956 }, 00:20:54.956 { 00:20:54.956 "name": "BaseBdev2", 00:20:54.956 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:20:54.956 
"is_configured": true, 00:20:54.956 "data_offset": 2048, 00:20:54.956 "data_size": 63488 00:20:54.956 }, 00:20:54.956 { 00:20:54.956 "name": "BaseBdev3", 00:20:54.956 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:20:54.956 "is_configured": true, 00:20:54.956 "data_offset": 2048, 00:20:54.956 "data_size": 63488 00:20:54.956 } 00:20:54.956 ] 00:20:54.956 }' 00:20:54.956 06:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:54.956 06:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.956 06:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.215 06:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.215 06:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:56.151 06:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:56.151 06:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.151 06:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:56.151 06:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:56.151 06:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:56.151 06:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:56.151 06:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.151 06:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.151 06:47:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.151 06:47:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.151 06:47:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.151 06:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:56.151 "name": "raid_bdev1", 00:20:56.151 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:20:56.151 "strip_size_kb": 64, 00:20:56.151 "state": "online", 00:20:56.151 "raid_level": "raid5f", 00:20:56.151 "superblock": true, 00:20:56.151 "num_base_bdevs": 3, 00:20:56.151 "num_base_bdevs_discovered": 3, 00:20:56.151 "num_base_bdevs_operational": 3, 00:20:56.151 "process": { 00:20:56.151 "type": "rebuild", 00:20:56.151 "target": "spare", 00:20:56.151 "progress": { 00:20:56.151 "blocks": 92160, 00:20:56.151 "percent": 72 00:20:56.151 } 00:20:56.151 }, 00:20:56.151 "base_bdevs_list": [ 00:20:56.151 { 00:20:56.151 "name": "spare", 00:20:56.151 "uuid": "1bfad234-7027-59fc-9c75-803f8ed27771", 00:20:56.151 "is_configured": true, 00:20:56.151 "data_offset": 2048, 00:20:56.151 "data_size": 63488 00:20:56.151 }, 00:20:56.151 { 00:20:56.151 "name": "BaseBdev2", 00:20:56.151 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:20:56.151 "is_configured": true, 00:20:56.151 "data_offset": 2048, 00:20:56.151 "data_size": 63488 00:20:56.151 }, 00:20:56.151 { 00:20:56.151 "name": "BaseBdev3", 00:20:56.151 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:20:56.151 "is_configured": true, 00:20:56.151 "data_offset": 2048, 00:20:56.151 "data_size": 63488 00:20:56.151 } 00:20:56.151 ] 00:20:56.151 }' 00:20:56.151 06:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:56.151 06:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:56.151 06:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:56.409 06:47:14 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:56.409 06:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:57.350 06:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:57.350 06:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.350 06:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.350 06:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.350 06:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.350 06:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.350 06:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.350 06:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.350 06:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.350 06:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.350 06:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.350 06:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.350 "name": "raid_bdev1", 00:20:57.350 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:20:57.350 "strip_size_kb": 64, 00:20:57.350 "state": "online", 00:20:57.350 "raid_level": "raid5f", 00:20:57.350 "superblock": true, 00:20:57.350 "num_base_bdevs": 3, 00:20:57.350 "num_base_bdevs_discovered": 3, 00:20:57.350 "num_base_bdevs_operational": 3, 00:20:57.350 "process": { 00:20:57.350 "type": "rebuild", 00:20:57.350 "target": "spare", 00:20:57.350 "progress": { 00:20:57.350 "blocks": 116736, 
00:20:57.350 "percent": 91 00:20:57.350 } 00:20:57.350 }, 00:20:57.350 "base_bdevs_list": [ 00:20:57.350 { 00:20:57.350 "name": "spare", 00:20:57.350 "uuid": "1bfad234-7027-59fc-9c75-803f8ed27771", 00:20:57.350 "is_configured": true, 00:20:57.350 "data_offset": 2048, 00:20:57.350 "data_size": 63488 00:20:57.350 }, 00:20:57.350 { 00:20:57.350 "name": "BaseBdev2", 00:20:57.350 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:20:57.350 "is_configured": true, 00:20:57.350 "data_offset": 2048, 00:20:57.350 "data_size": 63488 00:20:57.350 }, 00:20:57.350 { 00:20:57.350 "name": "BaseBdev3", 00:20:57.350 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:20:57.350 "is_configured": true, 00:20:57.350 "data_offset": 2048, 00:20:57.350 "data_size": 63488 00:20:57.350 } 00:20:57.350 ] 00:20:57.350 }' 00:20:57.350 06:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.350 06:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.350 06:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.350 06:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.350 06:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:57.917 [2024-12-06 06:47:16.304095] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:57.918 [2024-12-06 06:47:16.304459] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:57.918 [2024-12-06 06:47:16.304667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.485 06:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:58.485 06:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.485 
06:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.485 06:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:58.485 06:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:58.485 06:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.485 06:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.485 06:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.485 06:47:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.485 06:47:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.485 06:47:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.485 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.485 "name": "raid_bdev1", 00:20:58.485 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:20:58.485 "strip_size_kb": 64, 00:20:58.485 "state": "online", 00:20:58.485 "raid_level": "raid5f", 00:20:58.485 "superblock": true, 00:20:58.485 "num_base_bdevs": 3, 00:20:58.485 "num_base_bdevs_discovered": 3, 00:20:58.485 "num_base_bdevs_operational": 3, 00:20:58.485 "base_bdevs_list": [ 00:20:58.485 { 00:20:58.485 "name": "spare", 00:20:58.485 "uuid": "1bfad234-7027-59fc-9c75-803f8ed27771", 00:20:58.485 "is_configured": true, 00:20:58.485 "data_offset": 2048, 00:20:58.485 "data_size": 63488 00:20:58.485 }, 00:20:58.485 { 00:20:58.485 "name": "BaseBdev2", 00:20:58.485 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:20:58.485 "is_configured": true, 00:20:58.485 "data_offset": 2048, 00:20:58.485 "data_size": 63488 00:20:58.485 }, 00:20:58.485 { 00:20:58.485 "name": "BaseBdev3", 00:20:58.485 
"uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:20:58.485 "is_configured": true, 00:20:58.485 "data_offset": 2048, 00:20:58.485 "data_size": 63488 00:20:58.485 } 00:20:58.485 ] 00:20:58.485 }' 00:20:58.485 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.485 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:58.485 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.485 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:58.485 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:20:58.485 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:58.485 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.485 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:58.485 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:58.485 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.485 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.485 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.485 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.485 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.744 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.744 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.744 "name": 
"raid_bdev1", 00:20:58.744 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:20:58.744 "strip_size_kb": 64, 00:20:58.744 "state": "online", 00:20:58.744 "raid_level": "raid5f", 00:20:58.744 "superblock": true, 00:20:58.744 "num_base_bdevs": 3, 00:20:58.744 "num_base_bdevs_discovered": 3, 00:20:58.744 "num_base_bdevs_operational": 3, 00:20:58.744 "base_bdevs_list": [ 00:20:58.745 { 00:20:58.745 "name": "spare", 00:20:58.745 "uuid": "1bfad234-7027-59fc-9c75-803f8ed27771", 00:20:58.745 "is_configured": true, 00:20:58.745 "data_offset": 2048, 00:20:58.745 "data_size": 63488 00:20:58.745 }, 00:20:58.745 { 00:20:58.745 "name": "BaseBdev2", 00:20:58.745 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:20:58.745 "is_configured": true, 00:20:58.745 "data_offset": 2048, 00:20:58.745 "data_size": 63488 00:20:58.745 }, 00:20:58.745 { 00:20:58.745 "name": "BaseBdev3", 00:20:58.745 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:20:58.745 "is_configured": true, 00:20:58.745 "data_offset": 2048, 00:20:58.745 "data_size": 63488 00:20:58.745 } 00:20:58.745 ] 00:20:58.745 }' 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.745 "name": "raid_bdev1", 00:20:58.745 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:20:58.745 "strip_size_kb": 64, 00:20:58.745 "state": "online", 00:20:58.745 "raid_level": "raid5f", 00:20:58.745 "superblock": true, 00:20:58.745 "num_base_bdevs": 3, 00:20:58.745 "num_base_bdevs_discovered": 3, 00:20:58.745 "num_base_bdevs_operational": 3, 00:20:58.745 "base_bdevs_list": [ 00:20:58.745 { 00:20:58.745 "name": "spare", 00:20:58.745 "uuid": "1bfad234-7027-59fc-9c75-803f8ed27771", 00:20:58.745 "is_configured": true, 00:20:58.745 "data_offset": 2048, 00:20:58.745 "data_size": 63488 00:20:58.745 }, 00:20:58.745 { 00:20:58.745 "name": "BaseBdev2", 
00:20:58.745 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:20:58.745 "is_configured": true, 00:20:58.745 "data_offset": 2048, 00:20:58.745 "data_size": 63488 00:20:58.745 }, 00:20:58.745 { 00:20:58.745 "name": "BaseBdev3", 00:20:58.745 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:20:58.745 "is_configured": true, 00:20:58.745 "data_offset": 2048, 00:20:58.745 "data_size": 63488 00:20:58.745 } 00:20:58.745 ] 00:20:58.745 }' 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.745 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.313 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:59.313 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.313 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.313 [2024-12-06 06:47:17.820337] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:59.313 [2024-12-06 06:47:17.820507] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:59.313 [2024-12-06 06:47:17.820664] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:59.313 [2024-12-06 06:47:17.820774] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:59.313 [2024-12-06 06:47:17.820800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:59.313 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.313 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.313 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.313 06:47:17 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.313 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:20:59.313 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.313 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:59.313 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:59.313 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:59.313 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:59.314 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:59.314 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:59.314 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:59.314 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:59.314 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:59.314 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:59.314 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:59.314 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:59.314 06:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:59.881 /dev/nbd0 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:59.881 1+0 records in 00:20:59.881 1+0 records out 00:20:59.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298068 s, 13.7 MB/s 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i 
< 2 )) 00:20:59.881 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:00.140 /dev/nbd1 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:00.140 1+0 records in 00:21:00.140 1+0 records out 00:21:00.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277484 s, 14.8 MB/s 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:00.140 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:00.398 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:00.398 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:00.398 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:00.398 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:00.398 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:00.398 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:00.398 06:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:00.656 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:00.656 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:00.656 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:00.656 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:00.656 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:00.656 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:21:00.656 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:00.656 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:00.656 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:00.656 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:00.915 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:00.915 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:00.915 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:00.915 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:00.915 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:00.915 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:00.915 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:00.915 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:00.915 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:00.915 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:00.915 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.915 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.915 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.915 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:00.915 06:47:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.915 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.915 [2024-12-06 06:47:19.417756] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:00.915 [2024-12-06 06:47:19.417964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.915 [2024-12-06 06:47:19.418035] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:00.915 [2024-12-06 06:47:19.418683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.915 [2024-12-06 06:47:19.422022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.915 [2024-12-06 06:47:19.422299] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:00.915 [2024-12-06 06:47:19.422645] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:00.915 spare 00:21:00.915 [2024-12-06 06:47:19.422849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:00.916 [2024-12-06 06:47:19.423120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:00.916 [2024-12-06 06:47:19.423270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.916 [2024-12-06 06:47:19.523427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 
00:21:00.916 [2024-12-06 06:47:19.523783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:00.916 [2024-12-06 06:47:19.524284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:21:00.916 [2024-12-06 06:47:19.529491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:00.916 [2024-12-06 06:47:19.529654] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:00.916 [2024-12-06 06:47:19.530077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.916 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.174 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.174 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.174 "name": "raid_bdev1", 00:21:01.174 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:21:01.174 "strip_size_kb": 64, 00:21:01.174 "state": "online", 00:21:01.174 "raid_level": "raid5f", 00:21:01.174 "superblock": true, 00:21:01.174 "num_base_bdevs": 3, 00:21:01.174 "num_base_bdevs_discovered": 3, 00:21:01.174 "num_base_bdevs_operational": 3, 00:21:01.174 "base_bdevs_list": [ 00:21:01.174 { 00:21:01.174 "name": "spare", 00:21:01.174 "uuid": "1bfad234-7027-59fc-9c75-803f8ed27771", 00:21:01.174 "is_configured": true, 00:21:01.174 "data_offset": 2048, 00:21:01.174 "data_size": 63488 00:21:01.174 }, 00:21:01.174 { 00:21:01.174 "name": "BaseBdev2", 00:21:01.174 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:21:01.174 "is_configured": true, 00:21:01.174 "data_offset": 2048, 00:21:01.174 "data_size": 63488 00:21:01.174 }, 00:21:01.174 { 00:21:01.174 "name": "BaseBdev3", 00:21:01.174 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:21:01.174 "is_configured": true, 00:21:01.174 "data_offset": 2048, 00:21:01.174 "data_size": 63488 00:21:01.174 } 00:21:01.174 ] 00:21:01.174 }' 00:21:01.174 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.174 06:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:01.741 06:47:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:01.741 "name": "raid_bdev1", 00:21:01.741 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:21:01.741 "strip_size_kb": 64, 00:21:01.741 "state": "online", 00:21:01.741 "raid_level": "raid5f", 00:21:01.741 "superblock": true, 00:21:01.741 "num_base_bdevs": 3, 00:21:01.741 "num_base_bdevs_discovered": 3, 00:21:01.741 "num_base_bdevs_operational": 3, 00:21:01.741 "base_bdevs_list": [ 00:21:01.741 { 00:21:01.741 "name": "spare", 00:21:01.741 "uuid": "1bfad234-7027-59fc-9c75-803f8ed27771", 00:21:01.741 "is_configured": true, 00:21:01.741 "data_offset": 2048, 00:21:01.741 "data_size": 63488 00:21:01.741 }, 00:21:01.741 { 00:21:01.741 "name": "BaseBdev2", 00:21:01.741 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:21:01.741 "is_configured": true, 00:21:01.741 "data_offset": 2048, 00:21:01.741 "data_size": 63488 00:21:01.741 }, 00:21:01.741 { 00:21:01.741 "name": "BaseBdev3", 00:21:01.741 "uuid": 
"5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:21:01.741 "is_configured": true, 00:21:01.741 "data_offset": 2048, 00:21:01.741 "data_size": 63488 00:21:01.741 } 00:21:01.741 ] 00:21:01.741 }' 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.741 [2024-12-06 06:47:20.324226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:01.741 
06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.741 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.741 "name": "raid_bdev1", 00:21:01.741 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:21:01.741 "strip_size_kb": 64, 00:21:01.741 "state": "online", 00:21:01.741 "raid_level": "raid5f", 00:21:01.741 "superblock": true, 00:21:01.741 "num_base_bdevs": 3, 00:21:01.741 "num_base_bdevs_discovered": 2, 00:21:01.741 "num_base_bdevs_operational": 2, 
00:21:01.741 "base_bdevs_list": [ 00:21:01.741 { 00:21:01.741 "name": null, 00:21:01.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.741 "is_configured": false, 00:21:01.741 "data_offset": 0, 00:21:01.741 "data_size": 63488 00:21:01.741 }, 00:21:01.741 { 00:21:01.741 "name": "BaseBdev2", 00:21:01.741 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:21:01.741 "is_configured": true, 00:21:01.741 "data_offset": 2048, 00:21:01.741 "data_size": 63488 00:21:01.741 }, 00:21:01.742 { 00:21:01.742 "name": "BaseBdev3", 00:21:01.742 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:21:01.742 "is_configured": true, 00:21:01.742 "data_offset": 2048, 00:21:01.742 "data_size": 63488 00:21:01.742 } 00:21:01.742 ] 00:21:01.742 }' 00:21:01.742 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.742 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.306 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:02.306 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.306 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.306 [2024-12-06 06:47:20.832409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:02.306 [2024-12-06 06:47:20.832687] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:02.306 [2024-12-06 06:47:20.832717] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:02.306 [2024-12-06 06:47:20.833336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:02.306 [2024-12-06 06:47:20.848167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:21:02.306 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.306 06:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:02.306 [2024-12-06 06:47:20.855559] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:03.240 06:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:03.240 06:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:03.240 06:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:03.240 06:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:03.240 06:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:03.240 06:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.240 06:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.240 06:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.240 06:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.240 06:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.497 06:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:03.497 "name": "raid_bdev1", 00:21:03.497 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:21:03.497 "strip_size_kb": 64, 00:21:03.497 "state": "online", 00:21:03.497 
"raid_level": "raid5f", 00:21:03.497 "superblock": true, 00:21:03.497 "num_base_bdevs": 3, 00:21:03.497 "num_base_bdevs_discovered": 3, 00:21:03.497 "num_base_bdevs_operational": 3, 00:21:03.497 "process": { 00:21:03.497 "type": "rebuild", 00:21:03.497 "target": "spare", 00:21:03.497 "progress": { 00:21:03.497 "blocks": 18432, 00:21:03.497 "percent": 14 00:21:03.497 } 00:21:03.497 }, 00:21:03.497 "base_bdevs_list": [ 00:21:03.497 { 00:21:03.497 "name": "spare", 00:21:03.498 "uuid": "1bfad234-7027-59fc-9c75-803f8ed27771", 00:21:03.498 "is_configured": true, 00:21:03.498 "data_offset": 2048, 00:21:03.498 "data_size": 63488 00:21:03.498 }, 00:21:03.498 { 00:21:03.498 "name": "BaseBdev2", 00:21:03.498 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:21:03.498 "is_configured": true, 00:21:03.498 "data_offset": 2048, 00:21:03.498 "data_size": 63488 00:21:03.498 }, 00:21:03.498 { 00:21:03.498 "name": "BaseBdev3", 00:21:03.498 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:21:03.498 "is_configured": true, 00:21:03.498 "data_offset": 2048, 00:21:03.498 "data_size": 63488 00:21:03.498 } 00:21:03.498 ] 00:21:03.498 }' 00:21:03.498 06:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:03.498 06:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:03.498 06:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.498 [2024-12-06 06:47:22.034875] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:03.498 [2024-12-06 06:47:22.071150] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:03.498 [2024-12-06 06:47:22.072106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.498 [2024-12-06 06:47:22.072149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:03.498 [2024-12-06 06:47:22.072168] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.498 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.756 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.756 "name": "raid_bdev1", 00:21:03.756 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:21:03.756 "strip_size_kb": 64, 00:21:03.756 "state": "online", 00:21:03.756 "raid_level": "raid5f", 00:21:03.756 "superblock": true, 00:21:03.756 "num_base_bdevs": 3, 00:21:03.756 "num_base_bdevs_discovered": 2, 00:21:03.756 "num_base_bdevs_operational": 2, 00:21:03.756 "base_bdevs_list": [ 00:21:03.756 { 00:21:03.756 "name": null, 00:21:03.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.756 "is_configured": false, 00:21:03.756 "data_offset": 0, 00:21:03.756 "data_size": 63488 00:21:03.756 }, 00:21:03.756 { 00:21:03.756 "name": "BaseBdev2", 00:21:03.756 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:21:03.756 "is_configured": true, 00:21:03.756 "data_offset": 2048, 00:21:03.756 "data_size": 63488 00:21:03.756 }, 00:21:03.756 { 00:21:03.756 "name": "BaseBdev3", 00:21:03.756 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:21:03.756 "is_configured": true, 00:21:03.756 "data_offset": 2048, 00:21:03.756 "data_size": 63488 00:21:03.756 } 00:21:03.756 ] 00:21:03.756 }' 00:21:03.756 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.756 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.014 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:04.014 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.014 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.014 [2024-12-06 06:47:22.655996] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:04.014 [2024-12-06 06:47:22.656412] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.014 [2024-12-06 06:47:22.656458] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:21:04.014 [2024-12-06 06:47:22.656482] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.014 [2024-12-06 06:47:22.657155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.014 [2024-12-06 06:47:22.657197] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:04.014 [2024-12-06 06:47:22.657324] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:04.014 [2024-12-06 06:47:22.657354] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:04.014 [2024-12-06 06:47:22.657368] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:04.014 [2024-12-06 06:47:22.657402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:04.272 spare 00:21:04.272 [2024-12-06 06:47:22.671912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:21:04.272 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.272 06:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:04.272 [2024-12-06 06:47:22.679168] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:05.229 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:05.230 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:05.230 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:05.230 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:05.230 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:05.230 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.230 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.230 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.230 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.230 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.230 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:05.230 "name": "raid_bdev1", 00:21:05.230 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:21:05.230 "strip_size_kb": 64, 00:21:05.230 "state": 
"online", 00:21:05.230 "raid_level": "raid5f", 00:21:05.230 "superblock": true, 00:21:05.230 "num_base_bdevs": 3, 00:21:05.230 "num_base_bdevs_discovered": 3, 00:21:05.230 "num_base_bdevs_operational": 3, 00:21:05.230 "process": { 00:21:05.230 "type": "rebuild", 00:21:05.230 "target": "spare", 00:21:05.230 "progress": { 00:21:05.230 "blocks": 18432, 00:21:05.230 "percent": 14 00:21:05.230 } 00:21:05.230 }, 00:21:05.230 "base_bdevs_list": [ 00:21:05.230 { 00:21:05.230 "name": "spare", 00:21:05.230 "uuid": "1bfad234-7027-59fc-9c75-803f8ed27771", 00:21:05.230 "is_configured": true, 00:21:05.230 "data_offset": 2048, 00:21:05.230 "data_size": 63488 00:21:05.230 }, 00:21:05.230 { 00:21:05.230 "name": "BaseBdev2", 00:21:05.230 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:21:05.230 "is_configured": true, 00:21:05.230 "data_offset": 2048, 00:21:05.230 "data_size": 63488 00:21:05.230 }, 00:21:05.230 { 00:21:05.230 "name": "BaseBdev3", 00:21:05.230 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:21:05.230 "is_configured": true, 00:21:05.230 "data_offset": 2048, 00:21:05.230 "data_size": 63488 00:21:05.230 } 00:21:05.230 ] 00:21:05.230 }' 00:21:05.230 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:05.230 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:05.230 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:05.230 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:05.230 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:05.230 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.230 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.230 [2024-12-06 06:47:23.837399] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:05.564 [2024-12-06 06:47:23.894870] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:05.564 [2024-12-06 06:47:23.895121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.564 [2024-12-06 06:47:23.895157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:05.564 [2024-12-06 06:47:23.895171] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.564 "name": "raid_bdev1", 00:21:05.564 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:21:05.564 "strip_size_kb": 64, 00:21:05.564 "state": "online", 00:21:05.564 "raid_level": "raid5f", 00:21:05.564 "superblock": true, 00:21:05.564 "num_base_bdevs": 3, 00:21:05.564 "num_base_bdevs_discovered": 2, 00:21:05.564 "num_base_bdevs_operational": 2, 00:21:05.564 "base_bdevs_list": [ 00:21:05.564 { 00:21:05.564 "name": null, 00:21:05.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.564 "is_configured": false, 00:21:05.564 "data_offset": 0, 00:21:05.564 "data_size": 63488 00:21:05.564 }, 00:21:05.564 { 00:21:05.564 "name": "BaseBdev2", 00:21:05.564 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:21:05.564 "is_configured": true, 00:21:05.564 "data_offset": 2048, 00:21:05.564 "data_size": 63488 00:21:05.564 }, 00:21:05.564 { 00:21:05.564 "name": "BaseBdev3", 00:21:05.564 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:21:05.564 "is_configured": true, 00:21:05.564 "data_offset": 2048, 00:21:05.564 "data_size": 63488 00:21:05.564 } 00:21:05.564 ] 00:21:05.564 }' 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.564 06:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:06.132 "name": "raid_bdev1", 00:21:06.132 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:21:06.132 "strip_size_kb": 64, 00:21:06.132 "state": "online", 00:21:06.132 "raid_level": "raid5f", 00:21:06.132 "superblock": true, 00:21:06.132 "num_base_bdevs": 3, 00:21:06.132 "num_base_bdevs_discovered": 2, 00:21:06.132 "num_base_bdevs_operational": 2, 00:21:06.132 "base_bdevs_list": [ 00:21:06.132 { 00:21:06.132 "name": null, 00:21:06.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.132 "is_configured": false, 00:21:06.132 "data_offset": 0, 00:21:06.132 "data_size": 63488 00:21:06.132 }, 00:21:06.132 { 00:21:06.132 "name": "BaseBdev2", 00:21:06.132 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:21:06.132 "is_configured": true, 00:21:06.132 "data_offset": 2048, 00:21:06.132 "data_size": 63488 00:21:06.132 }, 00:21:06.132 { 00:21:06.132 "name": "BaseBdev3", 00:21:06.132 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:21:06.132 "is_configured": true, 
00:21:06.132 "data_offset": 2048, 00:21:06.132 "data_size": 63488 00:21:06.132 } 00:21:06.132 ] 00:21:06.132 }' 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.132 [2024-12-06 06:47:24.634166] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:06.132 [2024-12-06 06:47:24.634380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.132 [2024-12-06 06:47:24.634558] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:06.132 [2024-12-06 06:47:24.634693] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.132 [2024-12-06 06:47:24.635443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.132 [2024-12-06 
06:47:24.635487] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:06.132 [2024-12-06 06:47:24.635617] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:06.132 [2024-12-06 06:47:24.635641] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:06.132 [2024-12-06 06:47:24.635669] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:06.132 [2024-12-06 06:47:24.635682] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:06.132 BaseBdev1 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.132 06:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:07.065 06:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:07.065 06:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:07.065 06:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.065 06:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:07.065 06:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:07.065 06:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:07.065 06:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.065 06:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.065 06:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.065 06:47:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.065 06:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.066 06:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.066 06:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.066 06:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.066 06:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.066 06:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.066 "name": "raid_bdev1", 00:21:07.066 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:21:07.066 "strip_size_kb": 64, 00:21:07.066 "state": "online", 00:21:07.066 "raid_level": "raid5f", 00:21:07.066 "superblock": true, 00:21:07.066 "num_base_bdevs": 3, 00:21:07.066 "num_base_bdevs_discovered": 2, 00:21:07.066 "num_base_bdevs_operational": 2, 00:21:07.066 "base_bdevs_list": [ 00:21:07.066 { 00:21:07.066 "name": null, 00:21:07.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.066 "is_configured": false, 00:21:07.066 "data_offset": 0, 00:21:07.066 "data_size": 63488 00:21:07.066 }, 00:21:07.066 { 00:21:07.066 "name": "BaseBdev2", 00:21:07.066 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:21:07.066 "is_configured": true, 00:21:07.066 "data_offset": 2048, 00:21:07.066 "data_size": 63488 00:21:07.066 }, 00:21:07.066 { 00:21:07.066 "name": "BaseBdev3", 00:21:07.066 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:21:07.066 "is_configured": true, 00:21:07.066 "data_offset": 2048, 00:21:07.066 "data_size": 63488 00:21:07.066 } 00:21:07.066 ] 00:21:07.066 }' 00:21:07.066 06:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.066 06:47:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:07.631 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:07.631 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:07.631 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:07.631 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:07.631 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:07.631 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.631 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.631 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.631 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.631 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.631 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:07.631 "name": "raid_bdev1", 00:21:07.631 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:21:07.631 "strip_size_kb": 64, 00:21:07.631 "state": "online", 00:21:07.631 "raid_level": "raid5f", 00:21:07.631 "superblock": true, 00:21:07.631 "num_base_bdevs": 3, 00:21:07.631 "num_base_bdevs_discovered": 2, 00:21:07.631 "num_base_bdevs_operational": 2, 00:21:07.631 "base_bdevs_list": [ 00:21:07.631 { 00:21:07.631 "name": null, 00:21:07.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.631 "is_configured": false, 00:21:07.631 "data_offset": 0, 00:21:07.631 "data_size": 63488 00:21:07.631 }, 00:21:07.631 { 00:21:07.631 "name": "BaseBdev2", 00:21:07.631 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 
00:21:07.631 "is_configured": true, 00:21:07.631 "data_offset": 2048, 00:21:07.631 "data_size": 63488 00:21:07.631 }, 00:21:07.631 { 00:21:07.631 "name": "BaseBdev3", 00:21:07.631 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:21:07.631 "is_configured": true, 00:21:07.631 "data_offset": 2048, 00:21:07.631 "data_size": 63488 00:21:07.631 } 00:21:07.631 ] 00:21:07.631 }' 00:21:07.631 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:07.631 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:07.631 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:07.890 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:07.890 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:07.890 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:21:07.890 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:07.890 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:07.890 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:07.890 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:07.890 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:07.890 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:07.890 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.890 06:47:26 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.890 [2024-12-06 06:47:26.310691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:07.890 [2024-12-06 06:47:26.311049] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:07.890 [2024-12-06 06:47:26.311083] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:07.890 request: 00:21:07.890 { 00:21:07.890 "base_bdev": "BaseBdev1", 00:21:07.890 "raid_bdev": "raid_bdev1", 00:21:07.890 "method": "bdev_raid_add_base_bdev", 00:21:07.890 "req_id": 1 00:21:07.890 } 00:21:07.890 Got JSON-RPC error response 00:21:07.890 response: 00:21:07.890 { 00:21:07.890 "code": -22, 00:21:07.890 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:07.890 } 00:21:07.890 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:07.890 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:21:07.890 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:07.890 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:07.890 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:07.890 06:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.825 "name": "raid_bdev1", 00:21:08.825 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:21:08.825 "strip_size_kb": 64, 00:21:08.825 "state": "online", 00:21:08.825 "raid_level": "raid5f", 00:21:08.825 "superblock": true, 00:21:08.825 "num_base_bdevs": 3, 00:21:08.825 "num_base_bdevs_discovered": 2, 00:21:08.825 "num_base_bdevs_operational": 2, 00:21:08.825 "base_bdevs_list": [ 00:21:08.825 { 00:21:08.825 "name": null, 00:21:08.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.825 "is_configured": false, 00:21:08.825 "data_offset": 0, 00:21:08.825 "data_size": 63488 00:21:08.825 }, 00:21:08.825 { 00:21:08.825 
"name": "BaseBdev2", 00:21:08.825 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:21:08.825 "is_configured": true, 00:21:08.825 "data_offset": 2048, 00:21:08.825 "data_size": 63488 00:21:08.825 }, 00:21:08.825 { 00:21:08.825 "name": "BaseBdev3", 00:21:08.825 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:21:08.825 "is_configured": true, 00:21:08.825 "data_offset": 2048, 00:21:08.825 "data_size": 63488 00:21:08.825 } 00:21:08.825 ] 00:21:08.825 }' 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.825 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:09.395 "name": "raid_bdev1", 00:21:09.395 "uuid": "028afb46-df08-42e7-bd7d-984f3df3724c", 00:21:09.395 
"strip_size_kb": 64, 00:21:09.395 "state": "online", 00:21:09.395 "raid_level": "raid5f", 00:21:09.395 "superblock": true, 00:21:09.395 "num_base_bdevs": 3, 00:21:09.395 "num_base_bdevs_discovered": 2, 00:21:09.395 "num_base_bdevs_operational": 2, 00:21:09.395 "base_bdevs_list": [ 00:21:09.395 { 00:21:09.395 "name": null, 00:21:09.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.395 "is_configured": false, 00:21:09.395 "data_offset": 0, 00:21:09.395 "data_size": 63488 00:21:09.395 }, 00:21:09.395 { 00:21:09.395 "name": "BaseBdev2", 00:21:09.395 "uuid": "fbdf3405-72ae-5c12-bccd-1057bd5be010", 00:21:09.395 "is_configured": true, 00:21:09.395 "data_offset": 2048, 00:21:09.395 "data_size": 63488 00:21:09.395 }, 00:21:09.395 { 00:21:09.395 "name": "BaseBdev3", 00:21:09.395 "uuid": "5ffca18a-229d-55ee-bb90-807bd45cb0ef", 00:21:09.395 "is_configured": true, 00:21:09.395 "data_offset": 2048, 00:21:09.395 "data_size": 63488 00:21:09.395 } 00:21:09.395 ] 00:21:09.395 }' 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82571 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82571 ']' 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82571 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:09.395 06:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.395 06:47:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82571 00:21:09.395 killing process with pid 82571 00:21:09.395 Received shutdown signal, test time was about 60.000000 seconds 00:21:09.395 00:21:09.395 Latency(us) 00:21:09.395 [2024-12-06T06:47:28.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.395 [2024-12-06T06:47:28.042Z] =================================================================================================================== 00:21:09.395 [2024-12-06T06:47:28.042Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:09.395 06:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:09.395 06:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:09.395 06:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82571' 00:21:09.395 06:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82571 00:21:09.395 [2024-12-06 06:47:28.019109] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:09.395 06:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82571 00:21:09.395 [2024-12-06 06:47:28.019265] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:09.395 [2024-12-06 06:47:28.019352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:09.395 [2024-12-06 06:47:28.019373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:09.964 [2024-12-06 06:47:28.383111] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:10.901 06:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:21:10.901 00:21:10.901 real 0m25.210s 00:21:10.901 user 0m33.682s 
00:21:10.901 sys 0m2.654s 00:21:10.901 ************************************ 00:21:10.901 END TEST raid5f_rebuild_test_sb 00:21:10.901 ************************************ 00:21:10.901 06:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.901 06:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.901 06:47:29 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:21:10.901 06:47:29 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:21:10.901 06:47:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:10.901 06:47:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.901 06:47:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:10.901 ************************************ 00:21:10.901 START TEST raid5f_state_function_test 00:21:10.901 ************************************ 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:10.901 Process raid pid: 83336 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83336 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83336' 00:21:10.901 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:10.902 06:47:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83336 00:21:10.902 06:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83336 ']' 00:21:10.902 06:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.902 06:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.902 06:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.902 06:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.902 06:47:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.161 [2024-12-06 06:47:29.633372] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:21:11.161 [2024-12-06 06:47:29.633990] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.421 [2024-12-06 06:47:29.826862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.421 [2024-12-06 06:47:29.961892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.681 [2024-12-06 06:47:30.171638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.681 [2024-12-06 06:47:30.171917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:12.249 06:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.249 06:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:21:12.249 06:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:12.249 06:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.249 06:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.249 [2024-12-06 06:47:30.661818] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:12.249 [2024-12-06 06:47:30.662039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:12.249 [2024-12-06 06:47:30.662180] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:12.249 [2024-12-06 06:47:30.662244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:12.249 [2024-12-06 06:47:30.662262] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:21:12.249 [2024-12-06 06:47:30.662279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:12.249 [2024-12-06 06:47:30.662289] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:12.249 [2024-12-06 06:47:30.662305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:12.249 06:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.249 06:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:12.249 06:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:12.249 06:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:12.249 06:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:12.250 06:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:12.250 06:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:12.250 06:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.250 06:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.250 06:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.250 06:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.250 06:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.250 06:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.250 06:47:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:12.250 06:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.250 06:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.250 06:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.250 "name": "Existed_Raid", 00:21:12.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.250 "strip_size_kb": 64, 00:21:12.250 "state": "configuring", 00:21:12.250 "raid_level": "raid5f", 00:21:12.250 "superblock": false, 00:21:12.250 "num_base_bdevs": 4, 00:21:12.250 "num_base_bdevs_discovered": 0, 00:21:12.250 "num_base_bdevs_operational": 4, 00:21:12.250 "base_bdevs_list": [ 00:21:12.250 { 00:21:12.250 "name": "BaseBdev1", 00:21:12.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.250 "is_configured": false, 00:21:12.250 "data_offset": 0, 00:21:12.250 "data_size": 0 00:21:12.250 }, 00:21:12.250 { 00:21:12.250 "name": "BaseBdev2", 00:21:12.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.250 "is_configured": false, 00:21:12.250 "data_offset": 0, 00:21:12.250 "data_size": 0 00:21:12.250 }, 00:21:12.250 { 00:21:12.250 "name": "BaseBdev3", 00:21:12.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.250 "is_configured": false, 00:21:12.250 "data_offset": 0, 00:21:12.250 "data_size": 0 00:21:12.250 }, 00:21:12.250 { 00:21:12.250 "name": "BaseBdev4", 00:21:12.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.250 "is_configured": false, 00:21:12.250 "data_offset": 0, 00:21:12.250 "data_size": 0 00:21:12.250 } 00:21:12.250 ] 00:21:12.250 }' 00:21:12.250 06:47:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.250 06:47:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.818 [2024-12-06 06:47:31.197885] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:12.818 [2024-12-06 06:47:31.198071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.818 [2024-12-06 06:47:31.205880] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:12.818 [2024-12-06 06:47:31.205932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:12.818 [2024-12-06 06:47:31.205948] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:12.818 [2024-12-06 06:47:31.205965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:12.818 [2024-12-06 06:47:31.205975] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:12.818 [2024-12-06 06:47:31.205990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:12.818 [2024-12-06 06:47:31.205999] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:21:12.818 [2024-12-06 06:47:31.206014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.818 [2024-12-06 06:47:31.252457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:12.818 BaseBdev1 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.818 
06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.818 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.818 [ 00:21:12.818 { 00:21:12.818 "name": "BaseBdev1", 00:21:12.818 "aliases": [ 00:21:12.818 "f361bc2a-f1d8-414f-806e-ca9d2e4dc0b7" 00:21:12.818 ], 00:21:12.818 "product_name": "Malloc disk", 00:21:12.818 "block_size": 512, 00:21:12.818 "num_blocks": 65536, 00:21:12.818 "uuid": "f361bc2a-f1d8-414f-806e-ca9d2e4dc0b7", 00:21:12.818 "assigned_rate_limits": { 00:21:12.818 "rw_ios_per_sec": 0, 00:21:12.818 "rw_mbytes_per_sec": 0, 00:21:12.818 "r_mbytes_per_sec": 0, 00:21:12.818 "w_mbytes_per_sec": 0 00:21:12.819 }, 00:21:12.819 "claimed": true, 00:21:12.819 "claim_type": "exclusive_write", 00:21:12.819 "zoned": false, 00:21:12.819 "supported_io_types": { 00:21:12.819 "read": true, 00:21:12.819 "write": true, 00:21:12.819 "unmap": true, 00:21:12.819 "flush": true, 00:21:12.819 "reset": true, 00:21:12.819 "nvme_admin": false, 00:21:12.819 "nvme_io": false, 00:21:12.819 "nvme_io_md": false, 00:21:12.819 "write_zeroes": true, 00:21:12.819 "zcopy": true, 00:21:12.819 "get_zone_info": false, 00:21:12.819 "zone_management": false, 00:21:12.819 "zone_append": false, 00:21:12.819 "compare": false, 00:21:12.819 "compare_and_write": false, 00:21:12.819 "abort": true, 00:21:12.819 "seek_hole": false, 00:21:12.819 "seek_data": false, 00:21:12.819 "copy": true, 00:21:12.819 "nvme_iov_md": false 00:21:12.819 }, 00:21:12.819 "memory_domains": [ 00:21:12.819 { 00:21:12.819 "dma_device_id": "system", 00:21:12.819 "dma_device_type": 1 00:21:12.819 }, 00:21:12.819 { 00:21:12.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.819 "dma_device_type": 2 00:21:12.819 } 00:21:12.819 ], 00:21:12.819 "driver_specific": {} 00:21:12.819 } 
00:21:12.819 ] 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.819 "name": "Existed_Raid", 00:21:12.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.819 "strip_size_kb": 64, 00:21:12.819 "state": "configuring", 00:21:12.819 "raid_level": "raid5f", 00:21:12.819 "superblock": false, 00:21:12.819 "num_base_bdevs": 4, 00:21:12.819 "num_base_bdevs_discovered": 1, 00:21:12.819 "num_base_bdevs_operational": 4, 00:21:12.819 "base_bdevs_list": [ 00:21:12.819 { 00:21:12.819 "name": "BaseBdev1", 00:21:12.819 "uuid": "f361bc2a-f1d8-414f-806e-ca9d2e4dc0b7", 00:21:12.819 "is_configured": true, 00:21:12.819 "data_offset": 0, 00:21:12.819 "data_size": 65536 00:21:12.819 }, 00:21:12.819 { 00:21:12.819 "name": "BaseBdev2", 00:21:12.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.819 "is_configured": false, 00:21:12.819 "data_offset": 0, 00:21:12.819 "data_size": 0 00:21:12.819 }, 00:21:12.819 { 00:21:12.819 "name": "BaseBdev3", 00:21:12.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.819 "is_configured": false, 00:21:12.819 "data_offset": 0, 00:21:12.819 "data_size": 0 00:21:12.819 }, 00:21:12.819 { 00:21:12.819 "name": "BaseBdev4", 00:21:12.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.819 "is_configured": false, 00:21:12.819 "data_offset": 0, 00:21:12.819 "data_size": 0 00:21:12.819 } 00:21:12.819 ] 00:21:12.819 }' 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.819 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.387 
[2024-12-06 06:47:31.804601] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:13.387 [2024-12-06 06:47:31.804666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.387 [2024-12-06 06:47:31.812620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:13.387 [2024-12-06 06:47:31.815234] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:13.387 [2024-12-06 06:47:31.815417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:13.387 [2024-12-06 06:47:31.815552] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:13.387 [2024-12-06 06:47:31.815698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:13.387 [2024-12-06 06:47:31.815815] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:13.387 [2024-12-06 06:47:31.815875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.387 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.388 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.388 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.388 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.388 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.388 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.388 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.388 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.388 "name": "Existed_Raid", 00:21:13.388 "uuid": "00000000-0000-0000-0000-000000000000", 
00:21:13.388 "strip_size_kb": 64, 00:21:13.388 "state": "configuring", 00:21:13.388 "raid_level": "raid5f", 00:21:13.388 "superblock": false, 00:21:13.388 "num_base_bdevs": 4, 00:21:13.388 "num_base_bdevs_discovered": 1, 00:21:13.388 "num_base_bdevs_operational": 4, 00:21:13.388 "base_bdevs_list": [ 00:21:13.388 { 00:21:13.388 "name": "BaseBdev1", 00:21:13.388 "uuid": "f361bc2a-f1d8-414f-806e-ca9d2e4dc0b7", 00:21:13.388 "is_configured": true, 00:21:13.388 "data_offset": 0, 00:21:13.388 "data_size": 65536 00:21:13.388 }, 00:21:13.388 { 00:21:13.388 "name": "BaseBdev2", 00:21:13.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.388 "is_configured": false, 00:21:13.388 "data_offset": 0, 00:21:13.388 "data_size": 0 00:21:13.388 }, 00:21:13.388 { 00:21:13.388 "name": "BaseBdev3", 00:21:13.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.388 "is_configured": false, 00:21:13.388 "data_offset": 0, 00:21:13.388 "data_size": 0 00:21:13.388 }, 00:21:13.388 { 00:21:13.388 "name": "BaseBdev4", 00:21:13.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.388 "is_configured": false, 00:21:13.388 "data_offset": 0, 00:21:13.388 "data_size": 0 00:21:13.388 } 00:21:13.388 ] 00:21:13.388 }' 00:21:13.388 06:47:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.388 06:47:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.957 [2024-12-06 06:47:32.410108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:13.957 BaseBdev2 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.957 [ 00:21:13.957 { 00:21:13.957 "name": "BaseBdev2", 00:21:13.957 "aliases": [ 00:21:13.957 "aa061f13-4ad6-4f84-8d2d-00f235190217" 00:21:13.957 ], 00:21:13.957 "product_name": "Malloc disk", 00:21:13.957 "block_size": 512, 00:21:13.957 "num_blocks": 65536, 00:21:13.957 "uuid": "aa061f13-4ad6-4f84-8d2d-00f235190217", 00:21:13.957 "assigned_rate_limits": { 00:21:13.957 "rw_ios_per_sec": 0, 00:21:13.957 "rw_mbytes_per_sec": 0, 00:21:13.957 
"r_mbytes_per_sec": 0, 00:21:13.957 "w_mbytes_per_sec": 0 00:21:13.957 }, 00:21:13.957 "claimed": true, 00:21:13.957 "claim_type": "exclusive_write", 00:21:13.957 "zoned": false, 00:21:13.957 "supported_io_types": { 00:21:13.957 "read": true, 00:21:13.957 "write": true, 00:21:13.957 "unmap": true, 00:21:13.957 "flush": true, 00:21:13.957 "reset": true, 00:21:13.957 "nvme_admin": false, 00:21:13.957 "nvme_io": false, 00:21:13.957 "nvme_io_md": false, 00:21:13.957 "write_zeroes": true, 00:21:13.957 "zcopy": true, 00:21:13.957 "get_zone_info": false, 00:21:13.957 "zone_management": false, 00:21:13.957 "zone_append": false, 00:21:13.957 "compare": false, 00:21:13.957 "compare_and_write": false, 00:21:13.957 "abort": true, 00:21:13.957 "seek_hole": false, 00:21:13.957 "seek_data": false, 00:21:13.957 "copy": true, 00:21:13.957 "nvme_iov_md": false 00:21:13.957 }, 00:21:13.957 "memory_domains": [ 00:21:13.957 { 00:21:13.957 "dma_device_id": "system", 00:21:13.957 "dma_device_type": 1 00:21:13.957 }, 00:21:13.957 { 00:21:13.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.957 "dma_device_type": 2 00:21:13.957 } 00:21:13.957 ], 00:21:13.957 "driver_specific": {} 00:21:13.957 } 00:21:13.957 ] 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.957 "name": "Existed_Raid", 00:21:13.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.957 "strip_size_kb": 64, 00:21:13.957 "state": "configuring", 00:21:13.957 "raid_level": "raid5f", 00:21:13.957 "superblock": false, 00:21:13.957 "num_base_bdevs": 4, 00:21:13.957 "num_base_bdevs_discovered": 2, 00:21:13.957 "num_base_bdevs_operational": 4, 00:21:13.957 "base_bdevs_list": [ 00:21:13.957 { 00:21:13.957 "name": "BaseBdev1", 00:21:13.957 "uuid": 
"f361bc2a-f1d8-414f-806e-ca9d2e4dc0b7", 00:21:13.957 "is_configured": true, 00:21:13.957 "data_offset": 0, 00:21:13.957 "data_size": 65536 00:21:13.957 }, 00:21:13.957 { 00:21:13.957 "name": "BaseBdev2", 00:21:13.957 "uuid": "aa061f13-4ad6-4f84-8d2d-00f235190217", 00:21:13.957 "is_configured": true, 00:21:13.957 "data_offset": 0, 00:21:13.957 "data_size": 65536 00:21:13.957 }, 00:21:13.957 { 00:21:13.957 "name": "BaseBdev3", 00:21:13.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.957 "is_configured": false, 00:21:13.957 "data_offset": 0, 00:21:13.957 "data_size": 0 00:21:13.957 }, 00:21:13.957 { 00:21:13.957 "name": "BaseBdev4", 00:21:13.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.957 "is_configured": false, 00:21:13.957 "data_offset": 0, 00:21:13.957 "data_size": 0 00:21:13.957 } 00:21:13.957 ] 00:21:13.957 }' 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.957 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.524 06:47:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:14.524 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.524 06:47:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.524 BaseBdev3 00:21:14.524 [2024-12-06 06:47:33.033643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.524 [ 00:21:14.524 { 00:21:14.524 "name": "BaseBdev3", 00:21:14.524 "aliases": [ 00:21:14.524 "21bec4a3-99c7-4c1e-9316-fbc23b4f4a81" 00:21:14.524 ], 00:21:14.524 "product_name": "Malloc disk", 00:21:14.524 "block_size": 512, 00:21:14.524 "num_blocks": 65536, 00:21:14.524 "uuid": "21bec4a3-99c7-4c1e-9316-fbc23b4f4a81", 00:21:14.524 "assigned_rate_limits": { 00:21:14.524 "rw_ios_per_sec": 0, 00:21:14.524 "rw_mbytes_per_sec": 0, 00:21:14.524 "r_mbytes_per_sec": 0, 00:21:14.524 "w_mbytes_per_sec": 0 00:21:14.524 }, 00:21:14.524 "claimed": true, 00:21:14.524 "claim_type": "exclusive_write", 00:21:14.524 "zoned": false, 00:21:14.524 "supported_io_types": { 00:21:14.524 "read": true, 00:21:14.524 "write": true, 00:21:14.524 "unmap": true, 00:21:14.524 "flush": true, 00:21:14.524 "reset": true, 00:21:14.524 "nvme_admin": false, 
00:21:14.524 "nvme_io": false, 00:21:14.524 "nvme_io_md": false, 00:21:14.524 "write_zeroes": true, 00:21:14.524 "zcopy": true, 00:21:14.524 "get_zone_info": false, 00:21:14.524 "zone_management": false, 00:21:14.524 "zone_append": false, 00:21:14.524 "compare": false, 00:21:14.524 "compare_and_write": false, 00:21:14.524 "abort": true, 00:21:14.524 "seek_hole": false, 00:21:14.524 "seek_data": false, 00:21:14.524 "copy": true, 00:21:14.524 "nvme_iov_md": false 00:21:14.524 }, 00:21:14.524 "memory_domains": [ 00:21:14.524 { 00:21:14.524 "dma_device_id": "system", 00:21:14.524 "dma_device_type": 1 00:21:14.524 }, 00:21:14.524 { 00:21:14.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.524 "dma_device_type": 2 00:21:14.524 } 00:21:14.524 ], 00:21:14.524 "driver_specific": {} 00:21:14.524 } 00:21:14.524 ] 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.524 "name": "Existed_Raid", 00:21:14.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.524 "strip_size_kb": 64, 00:21:14.524 "state": "configuring", 00:21:14.524 "raid_level": "raid5f", 00:21:14.524 "superblock": false, 00:21:14.524 "num_base_bdevs": 4, 00:21:14.524 "num_base_bdevs_discovered": 3, 00:21:14.524 "num_base_bdevs_operational": 4, 00:21:14.524 "base_bdevs_list": [ 00:21:14.524 { 00:21:14.524 "name": "BaseBdev1", 00:21:14.524 "uuid": "f361bc2a-f1d8-414f-806e-ca9d2e4dc0b7", 00:21:14.524 "is_configured": true, 00:21:14.524 "data_offset": 0, 00:21:14.524 "data_size": 65536 00:21:14.524 }, 00:21:14.524 { 00:21:14.524 "name": "BaseBdev2", 00:21:14.524 "uuid": "aa061f13-4ad6-4f84-8d2d-00f235190217", 00:21:14.524 "is_configured": true, 00:21:14.524 "data_offset": 0, 00:21:14.524 "data_size": 65536 00:21:14.524 }, 00:21:14.524 { 
00:21:14.524 "name": "BaseBdev3", 00:21:14.524 "uuid": "21bec4a3-99c7-4c1e-9316-fbc23b4f4a81", 00:21:14.524 "is_configured": true, 00:21:14.524 "data_offset": 0, 00:21:14.524 "data_size": 65536 00:21:14.524 }, 00:21:14.524 { 00:21:14.524 "name": "BaseBdev4", 00:21:14.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.524 "is_configured": false, 00:21:14.524 "data_offset": 0, 00:21:14.524 "data_size": 0 00:21:14.524 } 00:21:14.524 ] 00:21:14.524 }' 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.524 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.091 [2024-12-06 06:47:33.617080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:15.091 [2024-12-06 06:47:33.617189] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:15.091 [2024-12-06 06:47:33.617209] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:15.091 [2024-12-06 06:47:33.617681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:15.091 BaseBdev4 00:21:15.091 [2024-12-06 06:47:33.624636] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:15.091 [2024-12-06 06:47:33.624669] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:15.091 [2024-12-06 06:47:33.625032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:15.091 06:47:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.091 [ 00:21:15.091 { 00:21:15.091 "name": "BaseBdev4", 00:21:15.091 "aliases": [ 00:21:15.091 "17bff079-37fe-4538-9308-fb9478dce43f" 00:21:15.091 ], 00:21:15.091 "product_name": "Malloc disk", 00:21:15.091 "block_size": 512, 00:21:15.091 "num_blocks": 65536, 00:21:15.091 "uuid": "17bff079-37fe-4538-9308-fb9478dce43f", 00:21:15.091 "assigned_rate_limits": { 00:21:15.091 "rw_ios_per_sec": 0, 00:21:15.091 
"rw_mbytes_per_sec": 0, 00:21:15.091 "r_mbytes_per_sec": 0, 00:21:15.091 "w_mbytes_per_sec": 0 00:21:15.091 }, 00:21:15.091 "claimed": true, 00:21:15.091 "claim_type": "exclusive_write", 00:21:15.091 "zoned": false, 00:21:15.091 "supported_io_types": { 00:21:15.091 "read": true, 00:21:15.091 "write": true, 00:21:15.091 "unmap": true, 00:21:15.091 "flush": true, 00:21:15.091 "reset": true, 00:21:15.091 "nvme_admin": false, 00:21:15.091 "nvme_io": false, 00:21:15.091 "nvme_io_md": false, 00:21:15.091 "write_zeroes": true, 00:21:15.091 "zcopy": true, 00:21:15.091 "get_zone_info": false, 00:21:15.091 "zone_management": false, 00:21:15.091 "zone_append": false, 00:21:15.091 "compare": false, 00:21:15.091 "compare_and_write": false, 00:21:15.091 "abort": true, 00:21:15.091 "seek_hole": false, 00:21:15.091 "seek_data": false, 00:21:15.091 "copy": true, 00:21:15.091 "nvme_iov_md": false 00:21:15.091 }, 00:21:15.091 "memory_domains": [ 00:21:15.091 { 00:21:15.091 "dma_device_id": "system", 00:21:15.091 "dma_device_type": 1 00:21:15.091 }, 00:21:15.091 { 00:21:15.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.091 "dma_device_type": 2 00:21:15.091 } 00:21:15.091 ], 00:21:15.091 "driver_specific": {} 00:21:15.091 } 00:21:15.091 ] 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:15.091 06:47:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.091 "name": "Existed_Raid", 00:21:15.091 "uuid": "0244f082-7f98-4bfe-973f-c6875a7d69f4", 00:21:15.091 "strip_size_kb": 64, 00:21:15.091 "state": "online", 00:21:15.091 "raid_level": "raid5f", 00:21:15.091 "superblock": false, 00:21:15.091 "num_base_bdevs": 4, 00:21:15.091 "num_base_bdevs_discovered": 4, 00:21:15.091 "num_base_bdevs_operational": 4, 00:21:15.091 "base_bdevs_list": [ 00:21:15.091 { 00:21:15.091 "name": 
"BaseBdev1", 00:21:15.091 "uuid": "f361bc2a-f1d8-414f-806e-ca9d2e4dc0b7", 00:21:15.091 "is_configured": true, 00:21:15.091 "data_offset": 0, 00:21:15.091 "data_size": 65536 00:21:15.091 }, 00:21:15.091 { 00:21:15.091 "name": "BaseBdev2", 00:21:15.091 "uuid": "aa061f13-4ad6-4f84-8d2d-00f235190217", 00:21:15.091 "is_configured": true, 00:21:15.091 "data_offset": 0, 00:21:15.091 "data_size": 65536 00:21:15.091 }, 00:21:15.091 { 00:21:15.091 "name": "BaseBdev3", 00:21:15.091 "uuid": "21bec4a3-99c7-4c1e-9316-fbc23b4f4a81", 00:21:15.091 "is_configured": true, 00:21:15.091 "data_offset": 0, 00:21:15.091 "data_size": 65536 00:21:15.091 }, 00:21:15.091 { 00:21:15.091 "name": "BaseBdev4", 00:21:15.091 "uuid": "17bff079-37fe-4538-9308-fb9478dce43f", 00:21:15.091 "is_configured": true, 00:21:15.091 "data_offset": 0, 00:21:15.091 "data_size": 65536 00:21:15.091 } 00:21:15.091 ] 00:21:15.091 }' 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.091 06:47:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.658 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:15.659 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:15.659 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:15.659 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:15.659 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:15.659 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:15.659 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:15.659 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:21:15.659 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.659 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.659 [2024-12-06 06:47:34.172792] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:15.659 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.659 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:15.659 "name": "Existed_Raid", 00:21:15.659 "aliases": [ 00:21:15.659 "0244f082-7f98-4bfe-973f-c6875a7d69f4" 00:21:15.659 ], 00:21:15.659 "product_name": "Raid Volume", 00:21:15.659 "block_size": 512, 00:21:15.659 "num_blocks": 196608, 00:21:15.659 "uuid": "0244f082-7f98-4bfe-973f-c6875a7d69f4", 00:21:15.659 "assigned_rate_limits": { 00:21:15.659 "rw_ios_per_sec": 0, 00:21:15.659 "rw_mbytes_per_sec": 0, 00:21:15.659 "r_mbytes_per_sec": 0, 00:21:15.659 "w_mbytes_per_sec": 0 00:21:15.659 }, 00:21:15.659 "claimed": false, 00:21:15.659 "zoned": false, 00:21:15.659 "supported_io_types": { 00:21:15.659 "read": true, 00:21:15.659 "write": true, 00:21:15.659 "unmap": false, 00:21:15.659 "flush": false, 00:21:15.659 "reset": true, 00:21:15.659 "nvme_admin": false, 00:21:15.659 "nvme_io": false, 00:21:15.659 "nvme_io_md": false, 00:21:15.659 "write_zeroes": true, 00:21:15.659 "zcopy": false, 00:21:15.659 "get_zone_info": false, 00:21:15.659 "zone_management": false, 00:21:15.659 "zone_append": false, 00:21:15.659 "compare": false, 00:21:15.659 "compare_and_write": false, 00:21:15.659 "abort": false, 00:21:15.659 "seek_hole": false, 00:21:15.659 "seek_data": false, 00:21:15.659 "copy": false, 00:21:15.659 "nvme_iov_md": false 00:21:15.659 }, 00:21:15.659 "driver_specific": { 00:21:15.659 "raid": { 00:21:15.659 "uuid": "0244f082-7f98-4bfe-973f-c6875a7d69f4", 00:21:15.659 "strip_size_kb": 64, 
00:21:15.659 "state": "online", 00:21:15.659 "raid_level": "raid5f", 00:21:15.659 "superblock": false, 00:21:15.659 "num_base_bdevs": 4, 00:21:15.659 "num_base_bdevs_discovered": 4, 00:21:15.659 "num_base_bdevs_operational": 4, 00:21:15.659 "base_bdevs_list": [ 00:21:15.659 { 00:21:15.659 "name": "BaseBdev1", 00:21:15.659 "uuid": "f361bc2a-f1d8-414f-806e-ca9d2e4dc0b7", 00:21:15.659 "is_configured": true, 00:21:15.659 "data_offset": 0, 00:21:15.659 "data_size": 65536 00:21:15.659 }, 00:21:15.659 { 00:21:15.659 "name": "BaseBdev2", 00:21:15.659 "uuid": "aa061f13-4ad6-4f84-8d2d-00f235190217", 00:21:15.659 "is_configured": true, 00:21:15.659 "data_offset": 0, 00:21:15.659 "data_size": 65536 00:21:15.659 }, 00:21:15.659 { 00:21:15.659 "name": "BaseBdev3", 00:21:15.659 "uuid": "21bec4a3-99c7-4c1e-9316-fbc23b4f4a81", 00:21:15.659 "is_configured": true, 00:21:15.659 "data_offset": 0, 00:21:15.659 "data_size": 65536 00:21:15.659 }, 00:21:15.659 { 00:21:15.659 "name": "BaseBdev4", 00:21:15.659 "uuid": "17bff079-37fe-4538-9308-fb9478dce43f", 00:21:15.659 "is_configured": true, 00:21:15.659 "data_offset": 0, 00:21:15.659 "data_size": 65536 00:21:15.659 } 00:21:15.659 ] 00:21:15.659 } 00:21:15.659 } 00:21:15.659 }' 00:21:15.659 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:15.659 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:15.659 BaseBdev2 00:21:15.659 BaseBdev3 00:21:15.659 BaseBdev4' 00:21:15.659 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:15.919 06:47:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.919 06:47:34 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:21:15.919 [2024-12-06 06:47:34.540715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.178 06:47:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.178 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.179 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.179 06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.179 "name": "Existed_Raid", 00:21:16.179 "uuid": "0244f082-7f98-4bfe-973f-c6875a7d69f4", 00:21:16.179 "strip_size_kb": 64, 00:21:16.179 "state": "online", 00:21:16.179 "raid_level": "raid5f", 00:21:16.179 "superblock": false, 00:21:16.179 "num_base_bdevs": 4, 00:21:16.179 "num_base_bdevs_discovered": 3, 00:21:16.179 "num_base_bdevs_operational": 3, 00:21:16.179 "base_bdevs_list": [ 00:21:16.179 { 00:21:16.179 "name": null, 00:21:16.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.179 "is_configured": false, 00:21:16.179 "data_offset": 0, 00:21:16.179 "data_size": 65536 00:21:16.179 }, 00:21:16.179 { 00:21:16.179 "name": "BaseBdev2", 00:21:16.179 "uuid": "aa061f13-4ad6-4f84-8d2d-00f235190217", 00:21:16.179 "is_configured": true, 00:21:16.179 "data_offset": 0, 00:21:16.179 "data_size": 65536 00:21:16.179 }, 00:21:16.179 { 00:21:16.179 "name": "BaseBdev3", 00:21:16.179 "uuid": "21bec4a3-99c7-4c1e-9316-fbc23b4f4a81", 00:21:16.179 "is_configured": true, 00:21:16.179 "data_offset": 0, 00:21:16.179 "data_size": 65536 00:21:16.179 }, 00:21:16.179 { 00:21:16.179 "name": "BaseBdev4", 00:21:16.179 "uuid": "17bff079-37fe-4538-9308-fb9478dce43f", 00:21:16.179 "is_configured": true, 00:21:16.179 "data_offset": 0, 00:21:16.179 "data_size": 65536 00:21:16.179 } 00:21:16.179 ] 00:21:16.179 }' 00:21:16.179 
06:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.179 06:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.745 [2024-12-06 06:47:35.199939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:16.745 [2024-12-06 06:47:35.200222] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:16.745 [2024-12-06 06:47:35.289398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.745 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.745 [2024-12-06 06:47:35.345483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.003 [2024-12-06 06:47:35.500472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:17.003 [2024-12-06 06:47:35.500695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.003 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.262 BaseBdev2 00:21:17.262 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.262 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:17.262 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:17.262 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:17.262 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:17.262 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:17.262 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:17.262 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:17.262 06:47:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.262 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.262 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.262 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:17.262 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.262 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.262 [ 00:21:17.262 { 00:21:17.262 "name": "BaseBdev2", 00:21:17.262 "aliases": [ 00:21:17.262 "86bed076-48ba-4850-8a98-81d730705080" 00:21:17.262 ], 00:21:17.262 "product_name": "Malloc disk", 00:21:17.262 "block_size": 512, 00:21:17.262 "num_blocks": 65536, 00:21:17.263 "uuid": "86bed076-48ba-4850-8a98-81d730705080", 00:21:17.263 "assigned_rate_limits": { 00:21:17.263 "rw_ios_per_sec": 0, 00:21:17.263 "rw_mbytes_per_sec": 0, 00:21:17.263 "r_mbytes_per_sec": 0, 00:21:17.263 "w_mbytes_per_sec": 0 00:21:17.263 }, 00:21:17.263 "claimed": false, 00:21:17.263 "zoned": false, 00:21:17.263 "supported_io_types": { 00:21:17.263 "read": true, 00:21:17.263 "write": true, 00:21:17.263 "unmap": true, 00:21:17.263 "flush": true, 00:21:17.263 "reset": true, 00:21:17.263 "nvme_admin": false, 00:21:17.263 "nvme_io": false, 00:21:17.263 "nvme_io_md": false, 00:21:17.263 "write_zeroes": true, 00:21:17.263 "zcopy": true, 00:21:17.263 "get_zone_info": false, 00:21:17.263 "zone_management": false, 00:21:17.263 "zone_append": false, 00:21:17.263 "compare": false, 00:21:17.263 "compare_and_write": false, 00:21:17.263 "abort": true, 00:21:17.263 "seek_hole": false, 00:21:17.263 "seek_data": false, 00:21:17.263 "copy": true, 00:21:17.263 "nvme_iov_md": false 00:21:17.263 }, 00:21:17.263 "memory_domains": [ 00:21:17.263 { 00:21:17.263 "dma_device_id": "system", 00:21:17.263 
"dma_device_type": 1 00:21:17.263 }, 00:21:17.263 { 00:21:17.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.263 "dma_device_type": 2 00:21:17.263 } 00:21:17.263 ], 00:21:17.263 "driver_specific": {} 00:21:17.263 } 00:21:17.263 ] 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.263 BaseBdev3 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:17.263 06:47:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.263 [ 00:21:17.263 { 00:21:17.263 "name": "BaseBdev3", 00:21:17.263 "aliases": [ 00:21:17.263 "89857976-23f8-4ec6-a8ee-5a765cc9f8cb" 00:21:17.263 ], 00:21:17.263 "product_name": "Malloc disk", 00:21:17.263 "block_size": 512, 00:21:17.263 "num_blocks": 65536, 00:21:17.263 "uuid": "89857976-23f8-4ec6-a8ee-5a765cc9f8cb", 00:21:17.263 "assigned_rate_limits": { 00:21:17.263 "rw_ios_per_sec": 0, 00:21:17.263 "rw_mbytes_per_sec": 0, 00:21:17.263 "r_mbytes_per_sec": 0, 00:21:17.263 "w_mbytes_per_sec": 0 00:21:17.263 }, 00:21:17.263 "claimed": false, 00:21:17.263 "zoned": false, 00:21:17.263 "supported_io_types": { 00:21:17.263 "read": true, 00:21:17.263 "write": true, 00:21:17.263 "unmap": true, 00:21:17.263 "flush": true, 00:21:17.263 "reset": true, 00:21:17.263 "nvme_admin": false, 00:21:17.263 "nvme_io": false, 00:21:17.263 "nvme_io_md": false, 00:21:17.263 "write_zeroes": true, 00:21:17.263 "zcopy": true, 00:21:17.263 "get_zone_info": false, 00:21:17.263 "zone_management": false, 00:21:17.263 "zone_append": false, 00:21:17.263 "compare": false, 00:21:17.263 "compare_and_write": false, 00:21:17.263 "abort": true, 00:21:17.263 "seek_hole": false, 00:21:17.263 "seek_data": false, 00:21:17.263 "copy": true, 00:21:17.263 "nvme_iov_md": false 00:21:17.263 }, 00:21:17.263 "memory_domains": [ 00:21:17.263 { 00:21:17.263 
"dma_device_id": "system", 00:21:17.263 "dma_device_type": 1 00:21:17.263 }, 00:21:17.263 { 00:21:17.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.263 "dma_device_type": 2 00:21:17.263 } 00:21:17.263 ], 00:21:17.263 "driver_specific": {} 00:21:17.263 } 00:21:17.263 ] 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.263 BaseBdev4 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.263 [ 00:21:17.263 { 00:21:17.263 "name": "BaseBdev4", 00:21:17.263 "aliases": [ 00:21:17.263 "fbe9759d-dd59-4b6f-a207-90f1d651dbb6" 00:21:17.263 ], 00:21:17.263 "product_name": "Malloc disk", 00:21:17.263 "block_size": 512, 00:21:17.263 "num_blocks": 65536, 00:21:17.263 "uuid": "fbe9759d-dd59-4b6f-a207-90f1d651dbb6", 00:21:17.263 "assigned_rate_limits": { 00:21:17.263 "rw_ios_per_sec": 0, 00:21:17.263 "rw_mbytes_per_sec": 0, 00:21:17.263 "r_mbytes_per_sec": 0, 00:21:17.263 "w_mbytes_per_sec": 0 00:21:17.263 }, 00:21:17.263 "claimed": false, 00:21:17.263 "zoned": false, 00:21:17.263 "supported_io_types": { 00:21:17.263 "read": true, 00:21:17.263 "write": true, 00:21:17.263 "unmap": true, 00:21:17.263 "flush": true, 00:21:17.263 "reset": true, 00:21:17.263 "nvme_admin": false, 00:21:17.263 "nvme_io": false, 00:21:17.263 "nvme_io_md": false, 00:21:17.263 "write_zeroes": true, 00:21:17.263 "zcopy": true, 00:21:17.263 "get_zone_info": false, 00:21:17.263 "zone_management": false, 00:21:17.263 "zone_append": false, 00:21:17.263 "compare": false, 00:21:17.263 "compare_and_write": false, 00:21:17.263 "abort": true, 00:21:17.263 "seek_hole": false, 00:21:17.263 "seek_data": false, 00:21:17.263 "copy": true, 00:21:17.263 "nvme_iov_md": false 00:21:17.263 }, 00:21:17.263 "memory_domains": [ 
00:21:17.263 { 00:21:17.263 "dma_device_id": "system", 00:21:17.263 "dma_device_type": 1 00:21:17.263 }, 00:21:17.263 { 00:21:17.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.263 "dma_device_type": 2 00:21:17.263 } 00:21:17.263 ], 00:21:17.263 "driver_specific": {} 00:21:17.263 } 00:21:17.263 ] 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.263 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.263 [2024-12-06 06:47:35.884926] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:17.263 [2024-12-06 06:47:35.885121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:17.264 [2024-12-06 06:47:35.885297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:17.264 [2024-12-06 06:47:35.888099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:17.264 [2024-12-06 06:47:35.888317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:17.264 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.264 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:17.264 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.264 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.264 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:17.264 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:17.264 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:17.264 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.264 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.264 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.264 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.264 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.264 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.264 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.264 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.522 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.522 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.522 "name": "Existed_Raid", 00:21:17.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.522 "strip_size_kb": 64, 00:21:17.522 "state": "configuring", 00:21:17.522 "raid_level": "raid5f", 00:21:17.522 
"superblock": false, 00:21:17.522 "num_base_bdevs": 4, 00:21:17.522 "num_base_bdevs_discovered": 3, 00:21:17.522 "num_base_bdevs_operational": 4, 00:21:17.522 "base_bdevs_list": [ 00:21:17.522 { 00:21:17.522 "name": "BaseBdev1", 00:21:17.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.522 "is_configured": false, 00:21:17.522 "data_offset": 0, 00:21:17.522 "data_size": 0 00:21:17.522 }, 00:21:17.522 { 00:21:17.522 "name": "BaseBdev2", 00:21:17.522 "uuid": "86bed076-48ba-4850-8a98-81d730705080", 00:21:17.523 "is_configured": true, 00:21:17.523 "data_offset": 0, 00:21:17.523 "data_size": 65536 00:21:17.523 }, 00:21:17.523 { 00:21:17.523 "name": "BaseBdev3", 00:21:17.523 "uuid": "89857976-23f8-4ec6-a8ee-5a765cc9f8cb", 00:21:17.523 "is_configured": true, 00:21:17.523 "data_offset": 0, 00:21:17.523 "data_size": 65536 00:21:17.523 }, 00:21:17.523 { 00:21:17.523 "name": "BaseBdev4", 00:21:17.523 "uuid": "fbe9759d-dd59-4b6f-a207-90f1d651dbb6", 00:21:17.523 "is_configured": true, 00:21:17.523 "data_offset": 0, 00:21:17.523 "data_size": 65536 00:21:17.523 } 00:21:17.523 ] 00:21:17.523 }' 00:21:17.523 06:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.523 06:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.781 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:17.781 06:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.781 06:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.781 [2024-12-06 06:47:36.421017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:17.781 06:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.781 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:21:17.781 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.781 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.781 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:17.781 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:17.781 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:17.781 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.782 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.782 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.782 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.040 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.040 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.040 06:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.040 06:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.040 06:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.040 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.040 "name": "Existed_Raid", 00:21:18.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.040 "strip_size_kb": 64, 00:21:18.040 "state": "configuring", 00:21:18.040 "raid_level": "raid5f", 00:21:18.040 "superblock": false, 
00:21:18.040 "num_base_bdevs": 4, 00:21:18.040 "num_base_bdevs_discovered": 2, 00:21:18.040 "num_base_bdevs_operational": 4, 00:21:18.040 "base_bdevs_list": [ 00:21:18.040 { 00:21:18.040 "name": "BaseBdev1", 00:21:18.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.040 "is_configured": false, 00:21:18.040 "data_offset": 0, 00:21:18.040 "data_size": 0 00:21:18.040 }, 00:21:18.040 { 00:21:18.040 "name": null, 00:21:18.040 "uuid": "86bed076-48ba-4850-8a98-81d730705080", 00:21:18.040 "is_configured": false, 00:21:18.040 "data_offset": 0, 00:21:18.040 "data_size": 65536 00:21:18.040 }, 00:21:18.040 { 00:21:18.040 "name": "BaseBdev3", 00:21:18.040 "uuid": "89857976-23f8-4ec6-a8ee-5a765cc9f8cb", 00:21:18.040 "is_configured": true, 00:21:18.040 "data_offset": 0, 00:21:18.040 "data_size": 65536 00:21:18.040 }, 00:21:18.040 { 00:21:18.040 "name": "BaseBdev4", 00:21:18.040 "uuid": "fbe9759d-dd59-4b6f-a207-90f1d651dbb6", 00:21:18.040 "is_configured": true, 00:21:18.040 "data_offset": 0, 00:21:18.040 "data_size": 65536 00:21:18.040 } 00:21:18.040 ] 00:21:18.040 }' 00:21:18.040 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.040 06:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.300 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:18.300 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.300 06:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.300 06:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.560 06:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.560 06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:18.560 
06:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:18.560 06:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.560 06:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.560 [2024-12-06 06:47:37.035762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:18.560 BaseBdev1 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.560 
06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.560 [ 00:21:18.560 { 00:21:18.560 "name": "BaseBdev1", 00:21:18.560 "aliases": [ 00:21:18.560 "d9f8aa78-5277-4d5a-91b0-673accf12041" 00:21:18.560 ], 00:21:18.560 "product_name": "Malloc disk", 00:21:18.560 "block_size": 512, 00:21:18.560 "num_blocks": 65536, 00:21:18.560 "uuid": "d9f8aa78-5277-4d5a-91b0-673accf12041", 00:21:18.560 "assigned_rate_limits": { 00:21:18.560 "rw_ios_per_sec": 0, 00:21:18.560 "rw_mbytes_per_sec": 0, 00:21:18.560 "r_mbytes_per_sec": 0, 00:21:18.560 "w_mbytes_per_sec": 0 00:21:18.560 }, 00:21:18.560 "claimed": true, 00:21:18.560 "claim_type": "exclusive_write", 00:21:18.560 "zoned": false, 00:21:18.560 "supported_io_types": { 00:21:18.560 "read": true, 00:21:18.560 "write": true, 00:21:18.560 "unmap": true, 00:21:18.560 "flush": true, 00:21:18.560 "reset": true, 00:21:18.560 "nvme_admin": false, 00:21:18.560 "nvme_io": false, 00:21:18.560 "nvme_io_md": false, 00:21:18.560 "write_zeroes": true, 00:21:18.560 "zcopy": true, 00:21:18.560 "get_zone_info": false, 00:21:18.560 "zone_management": false, 00:21:18.560 "zone_append": false, 00:21:18.560 "compare": false, 00:21:18.560 "compare_and_write": false, 00:21:18.560 "abort": true, 00:21:18.560 "seek_hole": false, 00:21:18.560 "seek_data": false, 00:21:18.560 "copy": true, 00:21:18.560 "nvme_iov_md": false 00:21:18.560 }, 00:21:18.560 "memory_domains": [ 00:21:18.560 { 00:21:18.560 "dma_device_id": "system", 00:21:18.560 "dma_device_type": 1 00:21:18.560 }, 00:21:18.560 { 00:21:18.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.560 "dma_device_type": 2 00:21:18.560 } 00:21:18.560 ], 00:21:18.560 "driver_specific": {} 00:21:18.560 } 00:21:18.560 ] 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:18.560 06:47:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.560 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.560 "name": "Existed_Raid", 00:21:18.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.560 "strip_size_kb": 64, 00:21:18.560 "state": 
"configuring", 00:21:18.560 "raid_level": "raid5f", 00:21:18.560 "superblock": false, 00:21:18.560 "num_base_bdevs": 4, 00:21:18.561 "num_base_bdevs_discovered": 3, 00:21:18.561 "num_base_bdevs_operational": 4, 00:21:18.561 "base_bdevs_list": [ 00:21:18.561 { 00:21:18.561 "name": "BaseBdev1", 00:21:18.561 "uuid": "d9f8aa78-5277-4d5a-91b0-673accf12041", 00:21:18.561 "is_configured": true, 00:21:18.561 "data_offset": 0, 00:21:18.561 "data_size": 65536 00:21:18.561 }, 00:21:18.561 { 00:21:18.561 "name": null, 00:21:18.561 "uuid": "86bed076-48ba-4850-8a98-81d730705080", 00:21:18.561 "is_configured": false, 00:21:18.561 "data_offset": 0, 00:21:18.561 "data_size": 65536 00:21:18.561 }, 00:21:18.561 { 00:21:18.561 "name": "BaseBdev3", 00:21:18.561 "uuid": "89857976-23f8-4ec6-a8ee-5a765cc9f8cb", 00:21:18.561 "is_configured": true, 00:21:18.561 "data_offset": 0, 00:21:18.561 "data_size": 65536 00:21:18.561 }, 00:21:18.561 { 00:21:18.561 "name": "BaseBdev4", 00:21:18.561 "uuid": "fbe9759d-dd59-4b6f-a207-90f1d651dbb6", 00:21:18.561 "is_configured": true, 00:21:18.561 "data_offset": 0, 00:21:18.561 "data_size": 65536 00:21:18.561 } 00:21:18.561 ] 00:21:18.561 }' 00:21:18.561 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.561 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.130 06:47:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.130 [2024-12-06 06:47:37.660063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.130 06:47:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.130 "name": "Existed_Raid", 00:21:19.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.130 "strip_size_kb": 64, 00:21:19.130 "state": "configuring", 00:21:19.130 "raid_level": "raid5f", 00:21:19.130 "superblock": false, 00:21:19.130 "num_base_bdevs": 4, 00:21:19.130 "num_base_bdevs_discovered": 2, 00:21:19.130 "num_base_bdevs_operational": 4, 00:21:19.130 "base_bdevs_list": [ 00:21:19.130 { 00:21:19.130 "name": "BaseBdev1", 00:21:19.130 "uuid": "d9f8aa78-5277-4d5a-91b0-673accf12041", 00:21:19.130 "is_configured": true, 00:21:19.130 "data_offset": 0, 00:21:19.130 "data_size": 65536 00:21:19.130 }, 00:21:19.130 { 00:21:19.130 "name": null, 00:21:19.130 "uuid": "86bed076-48ba-4850-8a98-81d730705080", 00:21:19.130 "is_configured": false, 00:21:19.130 "data_offset": 0, 00:21:19.130 "data_size": 65536 00:21:19.130 }, 00:21:19.130 { 00:21:19.130 "name": null, 00:21:19.130 "uuid": "89857976-23f8-4ec6-a8ee-5a765cc9f8cb", 00:21:19.130 "is_configured": false, 00:21:19.130 "data_offset": 0, 00:21:19.130 "data_size": 65536 00:21:19.130 }, 00:21:19.130 { 00:21:19.130 "name": "BaseBdev4", 00:21:19.130 "uuid": "fbe9759d-dd59-4b6f-a207-90f1d651dbb6", 00:21:19.130 "is_configured": true, 00:21:19.130 "data_offset": 0, 00:21:19.130 "data_size": 65536 00:21:19.130 } 00:21:19.130 ] 00:21:19.130 }' 00:21:19.130 06:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.130 06:47:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.698 [2024-12-06 06:47:38.272369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:19.698 
06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.698 "name": "Existed_Raid", 00:21:19.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.698 "strip_size_kb": 64, 00:21:19.698 "state": "configuring", 00:21:19.698 "raid_level": "raid5f", 00:21:19.698 "superblock": false, 00:21:19.698 "num_base_bdevs": 4, 00:21:19.698 "num_base_bdevs_discovered": 3, 00:21:19.698 "num_base_bdevs_operational": 4, 00:21:19.698 "base_bdevs_list": [ 00:21:19.698 { 00:21:19.698 "name": "BaseBdev1", 00:21:19.698 "uuid": "d9f8aa78-5277-4d5a-91b0-673accf12041", 00:21:19.698 "is_configured": true, 00:21:19.698 "data_offset": 0, 00:21:19.698 "data_size": 65536 00:21:19.698 }, 00:21:19.698 { 00:21:19.698 "name": null, 00:21:19.698 "uuid": "86bed076-48ba-4850-8a98-81d730705080", 00:21:19.698 "is_configured": 
false, 00:21:19.698 "data_offset": 0, 00:21:19.698 "data_size": 65536 00:21:19.698 }, 00:21:19.698 { 00:21:19.698 "name": "BaseBdev3", 00:21:19.698 "uuid": "89857976-23f8-4ec6-a8ee-5a765cc9f8cb", 00:21:19.698 "is_configured": true, 00:21:19.698 "data_offset": 0, 00:21:19.698 "data_size": 65536 00:21:19.698 }, 00:21:19.698 { 00:21:19.698 "name": "BaseBdev4", 00:21:19.698 "uuid": "fbe9759d-dd59-4b6f-a207-90f1d651dbb6", 00:21:19.698 "is_configured": true, 00:21:19.698 "data_offset": 0, 00:21:19.698 "data_size": 65536 00:21:19.698 } 00:21:19.698 ] 00:21:19.698 }' 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.698 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.307 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:20.307 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.307 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.307 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.307 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.307 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.308 [2024-12-06 06:47:38.828589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.308 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.566 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.566 "name": "Existed_Raid", 00:21:20.566 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:20.567 "strip_size_kb": 64, 00:21:20.567 "state": "configuring", 00:21:20.567 "raid_level": "raid5f", 00:21:20.567 "superblock": false, 00:21:20.567 "num_base_bdevs": 4, 00:21:20.567 "num_base_bdevs_discovered": 2, 00:21:20.567 "num_base_bdevs_operational": 4, 00:21:20.567 "base_bdevs_list": [ 00:21:20.567 { 00:21:20.567 "name": null, 00:21:20.567 "uuid": "d9f8aa78-5277-4d5a-91b0-673accf12041", 00:21:20.567 "is_configured": false, 00:21:20.567 "data_offset": 0, 00:21:20.567 "data_size": 65536 00:21:20.567 }, 00:21:20.567 { 00:21:20.567 "name": null, 00:21:20.567 "uuid": "86bed076-48ba-4850-8a98-81d730705080", 00:21:20.567 "is_configured": false, 00:21:20.567 "data_offset": 0, 00:21:20.567 "data_size": 65536 00:21:20.567 }, 00:21:20.567 { 00:21:20.567 "name": "BaseBdev3", 00:21:20.567 "uuid": "89857976-23f8-4ec6-a8ee-5a765cc9f8cb", 00:21:20.567 "is_configured": true, 00:21:20.567 "data_offset": 0, 00:21:20.567 "data_size": 65536 00:21:20.567 }, 00:21:20.567 { 00:21:20.567 "name": "BaseBdev4", 00:21:20.567 "uuid": "fbe9759d-dd59-4b6f-a207-90f1d651dbb6", 00:21:20.567 "is_configured": true, 00:21:20.567 "data_offset": 0, 00:21:20.567 "data_size": 65536 00:21:20.567 } 00:21:20.567 ] 00:21:20.567 }' 00:21:20.567 06:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.567 06:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.825 06:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.825 06:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:20.825 06:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.825 06:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.084 [2024-12-06 06:47:39.507132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.084 "name": "Existed_Raid", 00:21:21.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.084 "strip_size_kb": 64, 00:21:21.084 "state": "configuring", 00:21:21.084 "raid_level": "raid5f", 00:21:21.084 "superblock": false, 00:21:21.084 "num_base_bdevs": 4, 00:21:21.084 "num_base_bdevs_discovered": 3, 00:21:21.084 "num_base_bdevs_operational": 4, 00:21:21.084 "base_bdevs_list": [ 00:21:21.084 { 00:21:21.084 "name": null, 00:21:21.084 "uuid": "d9f8aa78-5277-4d5a-91b0-673accf12041", 00:21:21.084 "is_configured": false, 00:21:21.084 "data_offset": 0, 00:21:21.084 "data_size": 65536 00:21:21.084 }, 00:21:21.084 { 00:21:21.084 "name": "BaseBdev2", 00:21:21.084 "uuid": "86bed076-48ba-4850-8a98-81d730705080", 00:21:21.084 "is_configured": true, 00:21:21.084 "data_offset": 0, 00:21:21.084 "data_size": 65536 00:21:21.084 }, 00:21:21.084 { 00:21:21.084 "name": "BaseBdev3", 00:21:21.084 "uuid": "89857976-23f8-4ec6-a8ee-5a765cc9f8cb", 00:21:21.084 "is_configured": true, 00:21:21.084 "data_offset": 0, 00:21:21.084 "data_size": 65536 00:21:21.084 }, 00:21:21.084 { 00:21:21.084 "name": "BaseBdev4", 00:21:21.084 "uuid": "fbe9759d-dd59-4b6f-a207-90f1d651dbb6", 00:21:21.084 "is_configured": true, 00:21:21.084 "data_offset": 0, 00:21:21.084 "data_size": 65536 00:21:21.084 } 00:21:21.084 ] 00:21:21.084 }' 00:21:21.084 06:47:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.084 06:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d9f8aa78-5277-4d5a-91b0-673accf12041 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.653 [2024-12-06 06:47:40.177871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:21.653 [2024-12-06 
06:47:40.178148] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:21.653 [2024-12-06 06:47:40.178173] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:21.653 [2024-12-06 06:47:40.178546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:21.653 [2024-12-06 06:47:40.184956] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:21.653 [2024-12-06 06:47:40.184988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:21.653 [2024-12-06 06:47:40.185404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.653 NewBaseBdev 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.653 [ 00:21:21.653 { 00:21:21.653 "name": "NewBaseBdev", 00:21:21.653 "aliases": [ 00:21:21.653 "d9f8aa78-5277-4d5a-91b0-673accf12041" 00:21:21.653 ], 00:21:21.653 "product_name": "Malloc disk", 00:21:21.653 "block_size": 512, 00:21:21.653 "num_blocks": 65536, 00:21:21.653 "uuid": "d9f8aa78-5277-4d5a-91b0-673accf12041", 00:21:21.653 "assigned_rate_limits": { 00:21:21.653 "rw_ios_per_sec": 0, 00:21:21.653 "rw_mbytes_per_sec": 0, 00:21:21.653 "r_mbytes_per_sec": 0, 00:21:21.653 "w_mbytes_per_sec": 0 00:21:21.653 }, 00:21:21.653 "claimed": true, 00:21:21.653 "claim_type": "exclusive_write", 00:21:21.653 "zoned": false, 00:21:21.653 "supported_io_types": { 00:21:21.653 "read": true, 00:21:21.653 "write": true, 00:21:21.653 "unmap": true, 00:21:21.653 "flush": true, 00:21:21.653 "reset": true, 00:21:21.653 "nvme_admin": false, 00:21:21.653 "nvme_io": false, 00:21:21.653 "nvme_io_md": false, 00:21:21.653 "write_zeroes": true, 00:21:21.653 "zcopy": true, 00:21:21.653 "get_zone_info": false, 00:21:21.653 "zone_management": false, 00:21:21.653 "zone_append": false, 00:21:21.653 "compare": false, 00:21:21.653 "compare_and_write": false, 00:21:21.653 "abort": true, 00:21:21.653 "seek_hole": false, 00:21:21.653 "seek_data": false, 00:21:21.653 "copy": true, 00:21:21.653 "nvme_iov_md": false 00:21:21.653 }, 00:21:21.653 "memory_domains": [ 00:21:21.653 { 00:21:21.653 "dma_device_id": "system", 00:21:21.653 "dma_device_type": 1 00:21:21.653 }, 00:21:21.653 { 00:21:21.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:21.653 "dma_device_type": 2 00:21:21.653 } 
00:21:21.653 ], 00:21:21.653 "driver_specific": {} 00:21:21.653 } 00:21:21.653 ] 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.653 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.653 "name": "Existed_Raid", 00:21:21.653 "uuid": "0d2fd178-c4dc-4bec-9fb5-ed1955a0726c", 00:21:21.653 "strip_size_kb": 64, 00:21:21.653 "state": "online", 00:21:21.653 "raid_level": "raid5f", 00:21:21.653 "superblock": false, 00:21:21.653 "num_base_bdevs": 4, 00:21:21.653 "num_base_bdevs_discovered": 4, 00:21:21.653 "num_base_bdevs_operational": 4, 00:21:21.653 "base_bdevs_list": [ 00:21:21.653 { 00:21:21.653 "name": "NewBaseBdev", 00:21:21.653 "uuid": "d9f8aa78-5277-4d5a-91b0-673accf12041", 00:21:21.653 "is_configured": true, 00:21:21.653 "data_offset": 0, 00:21:21.653 "data_size": 65536 00:21:21.653 }, 00:21:21.653 { 00:21:21.653 "name": "BaseBdev2", 00:21:21.653 "uuid": "86bed076-48ba-4850-8a98-81d730705080", 00:21:21.653 "is_configured": true, 00:21:21.653 "data_offset": 0, 00:21:21.653 "data_size": 65536 00:21:21.653 }, 00:21:21.653 { 00:21:21.653 "name": "BaseBdev3", 00:21:21.653 "uuid": "89857976-23f8-4ec6-a8ee-5a765cc9f8cb", 00:21:21.654 "is_configured": true, 00:21:21.654 "data_offset": 0, 00:21:21.654 "data_size": 65536 00:21:21.654 }, 00:21:21.654 { 00:21:21.654 "name": "BaseBdev4", 00:21:21.654 "uuid": "fbe9759d-dd59-4b6f-a207-90f1d651dbb6", 00:21:21.654 "is_configured": true, 00:21:21.654 "data_offset": 0, 00:21:21.654 "data_size": 65536 00:21:21.654 } 00:21:21.654 ] 00:21:21.654 }' 00:21:21.654 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.654 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.219 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:22.219 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:22.219 06:47:40 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:22.219 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:22.219 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:22.219 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:22.219 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:22.219 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:22.219 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.219 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.219 [2024-12-06 06:47:40.757482] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:22.219 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.219 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:22.219 "name": "Existed_Raid", 00:21:22.219 "aliases": [ 00:21:22.219 "0d2fd178-c4dc-4bec-9fb5-ed1955a0726c" 00:21:22.219 ], 00:21:22.219 "product_name": "Raid Volume", 00:21:22.219 "block_size": 512, 00:21:22.219 "num_blocks": 196608, 00:21:22.219 "uuid": "0d2fd178-c4dc-4bec-9fb5-ed1955a0726c", 00:21:22.219 "assigned_rate_limits": { 00:21:22.219 "rw_ios_per_sec": 0, 00:21:22.219 "rw_mbytes_per_sec": 0, 00:21:22.219 "r_mbytes_per_sec": 0, 00:21:22.219 "w_mbytes_per_sec": 0 00:21:22.219 }, 00:21:22.219 "claimed": false, 00:21:22.219 "zoned": false, 00:21:22.219 "supported_io_types": { 00:21:22.219 "read": true, 00:21:22.219 "write": true, 00:21:22.219 "unmap": false, 00:21:22.219 "flush": false, 00:21:22.219 "reset": true, 00:21:22.219 "nvme_admin": false, 00:21:22.219 "nvme_io": false, 00:21:22.219 "nvme_io_md": 
false, 00:21:22.219 "write_zeroes": true, 00:21:22.219 "zcopy": false, 00:21:22.219 "get_zone_info": false, 00:21:22.219 "zone_management": false, 00:21:22.219 "zone_append": false, 00:21:22.219 "compare": false, 00:21:22.219 "compare_and_write": false, 00:21:22.219 "abort": false, 00:21:22.219 "seek_hole": false, 00:21:22.219 "seek_data": false, 00:21:22.219 "copy": false, 00:21:22.219 "nvme_iov_md": false 00:21:22.219 }, 00:21:22.219 "driver_specific": { 00:21:22.219 "raid": { 00:21:22.219 "uuid": "0d2fd178-c4dc-4bec-9fb5-ed1955a0726c", 00:21:22.219 "strip_size_kb": 64, 00:21:22.219 "state": "online", 00:21:22.219 "raid_level": "raid5f", 00:21:22.219 "superblock": false, 00:21:22.219 "num_base_bdevs": 4, 00:21:22.219 "num_base_bdevs_discovered": 4, 00:21:22.219 "num_base_bdevs_operational": 4, 00:21:22.219 "base_bdevs_list": [ 00:21:22.219 { 00:21:22.219 "name": "NewBaseBdev", 00:21:22.219 "uuid": "d9f8aa78-5277-4d5a-91b0-673accf12041", 00:21:22.219 "is_configured": true, 00:21:22.219 "data_offset": 0, 00:21:22.219 "data_size": 65536 00:21:22.219 }, 00:21:22.219 { 00:21:22.219 "name": "BaseBdev2", 00:21:22.219 "uuid": "86bed076-48ba-4850-8a98-81d730705080", 00:21:22.219 "is_configured": true, 00:21:22.219 "data_offset": 0, 00:21:22.219 "data_size": 65536 00:21:22.219 }, 00:21:22.219 { 00:21:22.219 "name": "BaseBdev3", 00:21:22.219 "uuid": "89857976-23f8-4ec6-a8ee-5a765cc9f8cb", 00:21:22.219 "is_configured": true, 00:21:22.219 "data_offset": 0, 00:21:22.219 "data_size": 65536 00:21:22.219 }, 00:21:22.219 { 00:21:22.219 "name": "BaseBdev4", 00:21:22.219 "uuid": "fbe9759d-dd59-4b6f-a207-90f1d651dbb6", 00:21:22.219 "is_configured": true, 00:21:22.219 "data_offset": 0, 00:21:22.219 "data_size": 65536 00:21:22.219 } 00:21:22.219 ] 00:21:22.219 } 00:21:22.219 } 00:21:22.219 }' 00:21:22.219 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:22.219 06:47:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:22.219 BaseBdev2 00:21:22.219 BaseBdev3 00:21:22.219 BaseBdev4' 00:21:22.219 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:22.477 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:22.477 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:22.477 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:22.477 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:22.477 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.477 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.477 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.477 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:22.477 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:22.477 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:22.477 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:22.477 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.477 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.477 06:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:21:22.477 06:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.477 06:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:22.477 06:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:22.477 06:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:22.477 06:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:22.477 06:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:22.477 06:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.477 06:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.477 06:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.477 06:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:22.477 06:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:22.477 06:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:22.477 06:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:22.478 06:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.478 06:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.478 06:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:22.478 06:47:41 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.478 06:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:22.478 06:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:22.478 06:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:22.478 06:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.478 06:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.478 [2024-12-06 06:47:41.121306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:22.478 [2024-12-06 06:47:41.121477] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:22.478 [2024-12-06 06:47:41.121799] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:22.736 [2024-12-06 06:47:41.122296] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:22.736 [2024-12-06 06:47:41.122332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:22.736 06:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.736 06:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83336 00:21:22.736 06:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83336 ']' 00:21:22.736 06:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83336 00:21:22.736 06:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:21:22.736 06:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.736 06:47:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83336 00:21:22.736 killing process with pid 83336 00:21:22.736 06:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:22.736 06:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:22.736 06:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83336' 00:21:22.736 06:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83336 00:21:22.736 [2024-12-06 06:47:41.156731] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:22.736 06:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83336 00:21:22.995 [2024-12-06 06:47:41.525126] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:24.378 00:21:24.378 real 0m13.126s 00:21:24.378 user 0m21.692s 00:21:24.378 sys 0m1.893s 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.378 ************************************ 00:21:24.378 END TEST raid5f_state_function_test 00:21:24.378 ************************************ 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.378 06:47:42 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:21:24.378 06:47:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:24.378 06:47:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:24.378 06:47:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:24.378 ************************************ 00:21:24.378 START TEST 
raid5f_state_function_test_sb 00:21:24.378 ************************************ 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:24.378 
06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:24.378 Process raid pid: 84022 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84022 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84022' 00:21:24.378 06:47:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84022 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84022 ']' 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.378 06:47:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.378 [2024-12-06 06:47:42.820259] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:21:24.378 [2024-12-06 06:47:42.820452] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.378 [2024-12-06 06:47:43.008271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.646 [2024-12-06 06:47:43.176378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.905 [2024-12-06 06:47:43.389495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:24.905 [2024-12-06 06:47:43.389564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.473 [2024-12-06 06:47:43.914144] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:25.473 [2024-12-06 06:47:43.914951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:25.473 [2024-12-06 06:47:43.915117] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:25.473 [2024-12-06 06:47:43.915276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:25.473 [2024-12-06 06:47:43.915429] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:21:25.473 [2024-12-06 06:47:43.915600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:25.473 [2024-12-06 06:47:43.915761] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:25.473 [2024-12-06 06:47:43.915912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.473 "name": "Existed_Raid", 00:21:25.473 "uuid": "27ad6699-9377-442b-b1e8-a495c1ed95e5", 00:21:25.473 "strip_size_kb": 64, 00:21:25.473 "state": "configuring", 00:21:25.473 "raid_level": "raid5f", 00:21:25.473 "superblock": true, 00:21:25.473 "num_base_bdevs": 4, 00:21:25.473 "num_base_bdevs_discovered": 0, 00:21:25.473 "num_base_bdevs_operational": 4, 00:21:25.473 "base_bdevs_list": [ 00:21:25.473 { 00:21:25.473 "name": "BaseBdev1", 00:21:25.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.473 "is_configured": false, 00:21:25.473 "data_offset": 0, 00:21:25.473 "data_size": 0 00:21:25.473 }, 00:21:25.473 { 00:21:25.473 "name": "BaseBdev2", 00:21:25.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.473 "is_configured": false, 00:21:25.473 "data_offset": 0, 00:21:25.473 "data_size": 0 00:21:25.473 }, 00:21:25.473 { 00:21:25.473 "name": "BaseBdev3", 00:21:25.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.473 "is_configured": false, 00:21:25.473 "data_offset": 0, 00:21:25.473 "data_size": 0 00:21:25.473 }, 00:21:25.473 { 00:21:25.473 "name": "BaseBdev4", 00:21:25.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.473 "is_configured": false, 00:21:25.473 "data_offset": 0, 00:21:25.473 "data_size": 0 00:21:25.473 } 00:21:25.473 ] 00:21:25.473 }' 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.473 06:47:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.040 [2024-12-06 06:47:44.458190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:26.040 [2024-12-06 06:47:44.458240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.040 [2024-12-06 06:47:44.470189] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:26.040 [2024-12-06 06:47:44.470565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:26.040 [2024-12-06 06:47:44.470720] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:26.040 [2024-12-06 06:47:44.470759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:26.040 [2024-12-06 06:47:44.470772] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:26.040 [2024-12-06 06:47:44.470789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:26.040 [2024-12-06 06:47:44.470799] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:26.040 [2024-12-06 06:47:44.470813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.040 [2024-12-06 06:47:44.516259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:26.040 BaseBdev1 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.040 [ 00:21:26.040 { 00:21:26.040 "name": "BaseBdev1", 00:21:26.040 "aliases": [ 00:21:26.040 "b71f2234-3ee3-4537-a992-062894b40c46" 00:21:26.040 ], 00:21:26.040 "product_name": "Malloc disk", 00:21:26.040 "block_size": 512, 00:21:26.040 "num_blocks": 65536, 00:21:26.040 "uuid": "b71f2234-3ee3-4537-a992-062894b40c46", 00:21:26.040 "assigned_rate_limits": { 00:21:26.040 "rw_ios_per_sec": 0, 00:21:26.040 "rw_mbytes_per_sec": 0, 00:21:26.040 "r_mbytes_per_sec": 0, 00:21:26.040 "w_mbytes_per_sec": 0 00:21:26.040 }, 00:21:26.040 "claimed": true, 00:21:26.040 "claim_type": "exclusive_write", 00:21:26.040 "zoned": false, 00:21:26.040 "supported_io_types": { 00:21:26.040 "read": true, 00:21:26.040 "write": true, 00:21:26.040 "unmap": true, 00:21:26.040 "flush": true, 00:21:26.040 "reset": true, 00:21:26.040 "nvme_admin": false, 00:21:26.040 "nvme_io": false, 00:21:26.040 "nvme_io_md": false, 00:21:26.040 "write_zeroes": true, 00:21:26.040 "zcopy": true, 00:21:26.040 "get_zone_info": false, 00:21:26.040 "zone_management": false, 00:21:26.040 "zone_append": false, 00:21:26.040 "compare": false, 00:21:26.040 "compare_and_write": false, 00:21:26.040 "abort": true, 00:21:26.040 "seek_hole": false, 00:21:26.040 "seek_data": false, 00:21:26.040 "copy": true, 00:21:26.040 "nvme_iov_md": false 00:21:26.040 }, 00:21:26.040 "memory_domains": [ 00:21:26.040 { 00:21:26.040 "dma_device_id": "system", 00:21:26.040 "dma_device_type": 1 00:21:26.040 }, 00:21:26.040 { 00:21:26.040 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:26.040 "dma_device_type": 2 00:21:26.040 } 00:21:26.040 ], 00:21:26.040 "driver_specific": {} 00:21:26.040 } 00:21:26.040 ] 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.040 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.040 06:47:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.041 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.041 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.041 "name": "Existed_Raid", 00:21:26.041 "uuid": "0baf3196-ca46-4e01-b6bd-00bab855c736", 00:21:26.041 "strip_size_kb": 64, 00:21:26.041 "state": "configuring", 00:21:26.041 "raid_level": "raid5f", 00:21:26.041 "superblock": true, 00:21:26.041 "num_base_bdevs": 4, 00:21:26.041 "num_base_bdevs_discovered": 1, 00:21:26.041 "num_base_bdevs_operational": 4, 00:21:26.041 "base_bdevs_list": [ 00:21:26.041 { 00:21:26.041 "name": "BaseBdev1", 00:21:26.041 "uuid": "b71f2234-3ee3-4537-a992-062894b40c46", 00:21:26.041 "is_configured": true, 00:21:26.041 "data_offset": 2048, 00:21:26.041 "data_size": 63488 00:21:26.041 }, 00:21:26.041 { 00:21:26.041 "name": "BaseBdev2", 00:21:26.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.041 "is_configured": false, 00:21:26.041 "data_offset": 0, 00:21:26.041 "data_size": 0 00:21:26.041 }, 00:21:26.041 { 00:21:26.041 "name": "BaseBdev3", 00:21:26.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.041 "is_configured": false, 00:21:26.041 "data_offset": 0, 00:21:26.041 "data_size": 0 00:21:26.041 }, 00:21:26.041 { 00:21:26.041 "name": "BaseBdev4", 00:21:26.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.041 "is_configured": false, 00:21:26.041 "data_offset": 0, 00:21:26.041 "data_size": 0 00:21:26.041 } 00:21:26.041 ] 00:21:26.041 }' 00:21:26.041 06:47:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.041 06:47:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.608 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:26.608 06:47:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.608 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.608 [2024-12-06 06:47:45.104485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:26.608 [2024-12-06 06:47:45.104572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:26.608 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.608 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:26.608 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.608 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.608 [2024-12-06 06:47:45.112585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:26.608 [2024-12-06 06:47:45.115304] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:26.608 [2024-12-06 06:47:45.115949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:26.608 [2024-12-06 06:47:45.116096] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:26.608 [2024-12-06 06:47:45.116244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:26.608 [2024-12-06 06:47:45.116369] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:26.608 [2024-12-06 06:47:45.116515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:26.608 06:47:45 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.608 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:26.608 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:26.608 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:26.609 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:26.609 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:26.609 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:26.609 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:26.609 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:26.609 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.609 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.609 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.609 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.609 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.609 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.609 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.609 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.609 06:47:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.609 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.609 "name": "Existed_Raid", 00:21:26.609 "uuid": "90764c2a-59b5-4bbc-9ae2-7d633844a673", 00:21:26.609 "strip_size_kb": 64, 00:21:26.609 "state": "configuring", 00:21:26.609 "raid_level": "raid5f", 00:21:26.609 "superblock": true, 00:21:26.609 "num_base_bdevs": 4, 00:21:26.609 "num_base_bdevs_discovered": 1, 00:21:26.609 "num_base_bdevs_operational": 4, 00:21:26.609 "base_bdevs_list": [ 00:21:26.609 { 00:21:26.609 "name": "BaseBdev1", 00:21:26.609 "uuid": "b71f2234-3ee3-4537-a992-062894b40c46", 00:21:26.609 "is_configured": true, 00:21:26.609 "data_offset": 2048, 00:21:26.609 "data_size": 63488 00:21:26.609 }, 00:21:26.609 { 00:21:26.609 "name": "BaseBdev2", 00:21:26.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.609 "is_configured": false, 00:21:26.609 "data_offset": 0, 00:21:26.609 "data_size": 0 00:21:26.609 }, 00:21:26.609 { 00:21:26.609 "name": "BaseBdev3", 00:21:26.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.609 "is_configured": false, 00:21:26.609 "data_offset": 0, 00:21:26.609 "data_size": 0 00:21:26.609 }, 00:21:26.609 { 00:21:26.609 "name": "BaseBdev4", 00:21:26.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.609 "is_configured": false, 00:21:26.609 "data_offset": 0, 00:21:26.609 "data_size": 0 00:21:26.609 } 00:21:26.609 ] 00:21:26.609 }' 00:21:26.609 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.609 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.175 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:27.175 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:27.175 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.175 BaseBdev2 00:21:27.175 [2024-12-06 06:47:45.652024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:27.175 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.175 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:27.175 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:27.175 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:27.175 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:27.175 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:27.175 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:27.175 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.176 [ 00:21:27.176 { 00:21:27.176 "name": "BaseBdev2", 00:21:27.176 "aliases": [ 00:21:27.176 
"0604c520-87e4-4c0c-8c4a-610f010e4fc9" 00:21:27.176 ], 00:21:27.176 "product_name": "Malloc disk", 00:21:27.176 "block_size": 512, 00:21:27.176 "num_blocks": 65536, 00:21:27.176 "uuid": "0604c520-87e4-4c0c-8c4a-610f010e4fc9", 00:21:27.176 "assigned_rate_limits": { 00:21:27.176 "rw_ios_per_sec": 0, 00:21:27.176 "rw_mbytes_per_sec": 0, 00:21:27.176 "r_mbytes_per_sec": 0, 00:21:27.176 "w_mbytes_per_sec": 0 00:21:27.176 }, 00:21:27.176 "claimed": true, 00:21:27.176 "claim_type": "exclusive_write", 00:21:27.176 "zoned": false, 00:21:27.176 "supported_io_types": { 00:21:27.176 "read": true, 00:21:27.176 "write": true, 00:21:27.176 "unmap": true, 00:21:27.176 "flush": true, 00:21:27.176 "reset": true, 00:21:27.176 "nvme_admin": false, 00:21:27.176 "nvme_io": false, 00:21:27.176 "nvme_io_md": false, 00:21:27.176 "write_zeroes": true, 00:21:27.176 "zcopy": true, 00:21:27.176 "get_zone_info": false, 00:21:27.176 "zone_management": false, 00:21:27.176 "zone_append": false, 00:21:27.176 "compare": false, 00:21:27.176 "compare_and_write": false, 00:21:27.176 "abort": true, 00:21:27.176 "seek_hole": false, 00:21:27.176 "seek_data": false, 00:21:27.176 "copy": true, 00:21:27.176 "nvme_iov_md": false 00:21:27.176 }, 00:21:27.176 "memory_domains": [ 00:21:27.176 { 00:21:27.176 "dma_device_id": "system", 00:21:27.176 "dma_device_type": 1 00:21:27.176 }, 00:21:27.176 { 00:21:27.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.176 "dma_device_type": 2 00:21:27.176 } 00:21:27.176 ], 00:21:27.176 "driver_specific": {} 00:21:27.176 } 00:21:27.176 ] 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.176 "name": "Existed_Raid", 00:21:27.176 "uuid": 
"90764c2a-59b5-4bbc-9ae2-7d633844a673", 00:21:27.176 "strip_size_kb": 64, 00:21:27.176 "state": "configuring", 00:21:27.176 "raid_level": "raid5f", 00:21:27.176 "superblock": true, 00:21:27.176 "num_base_bdevs": 4, 00:21:27.176 "num_base_bdevs_discovered": 2, 00:21:27.176 "num_base_bdevs_operational": 4, 00:21:27.176 "base_bdevs_list": [ 00:21:27.176 { 00:21:27.176 "name": "BaseBdev1", 00:21:27.176 "uuid": "b71f2234-3ee3-4537-a992-062894b40c46", 00:21:27.176 "is_configured": true, 00:21:27.176 "data_offset": 2048, 00:21:27.176 "data_size": 63488 00:21:27.176 }, 00:21:27.176 { 00:21:27.176 "name": "BaseBdev2", 00:21:27.176 "uuid": "0604c520-87e4-4c0c-8c4a-610f010e4fc9", 00:21:27.176 "is_configured": true, 00:21:27.176 "data_offset": 2048, 00:21:27.176 "data_size": 63488 00:21:27.176 }, 00:21:27.176 { 00:21:27.176 "name": "BaseBdev3", 00:21:27.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.176 "is_configured": false, 00:21:27.176 "data_offset": 0, 00:21:27.176 "data_size": 0 00:21:27.176 }, 00:21:27.176 { 00:21:27.176 "name": "BaseBdev4", 00:21:27.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.176 "is_configured": false, 00:21:27.176 "data_offset": 0, 00:21:27.176 "data_size": 0 00:21:27.176 } 00:21:27.176 ] 00:21:27.176 }' 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.176 06:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.743 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:27.743 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.743 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.743 BaseBdev3 00:21:27.743 [2024-12-06 06:47:46.256868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 
00:21:27.743 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.743 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:27.743 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:27.743 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:27.743 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:27.743 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:27.743 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:27.743 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:27.743 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.743 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.743 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.743 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:27.743 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.743 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.743 [ 00:21:27.743 { 00:21:27.743 "name": "BaseBdev3", 00:21:27.743 "aliases": [ 00:21:27.743 "4a6d2922-1d82-4122-a9e5-41ae5bf9a961" 00:21:27.743 ], 00:21:27.743 "product_name": "Malloc disk", 00:21:27.743 "block_size": 512, 00:21:27.743 "num_blocks": 65536, 00:21:27.743 "uuid": "4a6d2922-1d82-4122-a9e5-41ae5bf9a961", 00:21:27.743 
"assigned_rate_limits": { 00:21:27.743 "rw_ios_per_sec": 0, 00:21:27.743 "rw_mbytes_per_sec": 0, 00:21:27.743 "r_mbytes_per_sec": 0, 00:21:27.743 "w_mbytes_per_sec": 0 00:21:27.743 }, 00:21:27.743 "claimed": true, 00:21:27.743 "claim_type": "exclusive_write", 00:21:27.743 "zoned": false, 00:21:27.743 "supported_io_types": { 00:21:27.743 "read": true, 00:21:27.743 "write": true, 00:21:27.743 "unmap": true, 00:21:27.743 "flush": true, 00:21:27.743 "reset": true, 00:21:27.743 "nvme_admin": false, 00:21:27.743 "nvme_io": false, 00:21:27.743 "nvme_io_md": false, 00:21:27.743 "write_zeroes": true, 00:21:27.743 "zcopy": true, 00:21:27.743 "get_zone_info": false, 00:21:27.743 "zone_management": false, 00:21:27.743 "zone_append": false, 00:21:27.743 "compare": false, 00:21:27.743 "compare_and_write": false, 00:21:27.743 "abort": true, 00:21:27.743 "seek_hole": false, 00:21:27.743 "seek_data": false, 00:21:27.743 "copy": true, 00:21:27.743 "nvme_iov_md": false 00:21:27.743 }, 00:21:27.743 "memory_domains": [ 00:21:27.743 { 00:21:27.743 "dma_device_id": "system", 00:21:27.743 "dma_device_type": 1 00:21:27.743 }, 00:21:27.743 { 00:21:27.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.743 "dma_device_type": 2 00:21:27.743 } 00:21:27.743 ], 00:21:27.743 "driver_specific": {} 00:21:27.744 } 00:21:27.744 ] 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.744 "name": "Existed_Raid", 00:21:27.744 "uuid": "90764c2a-59b5-4bbc-9ae2-7d633844a673", 00:21:27.744 "strip_size_kb": 64, 00:21:27.744 "state": "configuring", 00:21:27.744 "raid_level": "raid5f", 00:21:27.744 "superblock": true, 00:21:27.744 "num_base_bdevs": 4, 00:21:27.744 "num_base_bdevs_discovered": 3, 
00:21:27.744 "num_base_bdevs_operational": 4, 00:21:27.744 "base_bdevs_list": [ 00:21:27.744 { 00:21:27.744 "name": "BaseBdev1", 00:21:27.744 "uuid": "b71f2234-3ee3-4537-a992-062894b40c46", 00:21:27.744 "is_configured": true, 00:21:27.744 "data_offset": 2048, 00:21:27.744 "data_size": 63488 00:21:27.744 }, 00:21:27.744 { 00:21:27.744 "name": "BaseBdev2", 00:21:27.744 "uuid": "0604c520-87e4-4c0c-8c4a-610f010e4fc9", 00:21:27.744 "is_configured": true, 00:21:27.744 "data_offset": 2048, 00:21:27.744 "data_size": 63488 00:21:27.744 }, 00:21:27.744 { 00:21:27.744 "name": "BaseBdev3", 00:21:27.744 "uuid": "4a6d2922-1d82-4122-a9e5-41ae5bf9a961", 00:21:27.744 "is_configured": true, 00:21:27.744 "data_offset": 2048, 00:21:27.744 "data_size": 63488 00:21:27.744 }, 00:21:27.744 { 00:21:27.744 "name": "BaseBdev4", 00:21:27.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.744 "is_configured": false, 00:21:27.744 "data_offset": 0, 00:21:27.744 "data_size": 0 00:21:27.744 } 00:21:27.744 ] 00:21:27.744 }' 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.744 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.357 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:28.357 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.357 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.357 [2024-12-06 06:47:46.878809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:28.357 BaseBdev4 00:21:28.357 [2024-12-06 06:47:46.879512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:28.357 [2024-12-06 06:47:46.879555] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 
00:21:28.357 [2024-12-06 06:47:46.879937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:28.357 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.357 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:28.357 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:28.357 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:28.357 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:28.357 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:28.357 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:28.357 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:28.357 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.358 [2024-12-06 06:47:46.887504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:28.358 [2024-12-06 06:47:46.887692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:28.358 [2024-12-06 06:47:46.888174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:28.358 06:47:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.358 [ 00:21:28.358 { 00:21:28.358 "name": "BaseBdev4", 00:21:28.358 "aliases": [ 00:21:28.358 "9af221a7-0f78-4a75-ab96-80be6659769e" 00:21:28.358 ], 00:21:28.358 "product_name": "Malloc disk", 00:21:28.358 "block_size": 512, 00:21:28.358 "num_blocks": 65536, 00:21:28.358 "uuid": "9af221a7-0f78-4a75-ab96-80be6659769e", 00:21:28.358 "assigned_rate_limits": { 00:21:28.358 "rw_ios_per_sec": 0, 00:21:28.358 "rw_mbytes_per_sec": 0, 00:21:28.358 "r_mbytes_per_sec": 0, 00:21:28.358 "w_mbytes_per_sec": 0 00:21:28.358 }, 00:21:28.358 "claimed": true, 00:21:28.358 "claim_type": "exclusive_write", 00:21:28.358 "zoned": false, 00:21:28.358 "supported_io_types": { 00:21:28.358 "read": true, 00:21:28.358 "write": true, 00:21:28.358 "unmap": true, 00:21:28.358 "flush": true, 00:21:28.358 "reset": true, 00:21:28.358 "nvme_admin": false, 00:21:28.358 "nvme_io": false, 00:21:28.358 "nvme_io_md": false, 00:21:28.358 "write_zeroes": true, 00:21:28.358 "zcopy": true, 00:21:28.358 "get_zone_info": false, 00:21:28.358 "zone_management": false, 00:21:28.358 "zone_append": false, 00:21:28.358 "compare": false, 00:21:28.358 "compare_and_write": false, 00:21:28.358 "abort": true, 00:21:28.358 "seek_hole": false, 00:21:28.358 "seek_data": false, 00:21:28.358 "copy": true, 00:21:28.358 "nvme_iov_md": false 00:21:28.358 }, 00:21:28.358 "memory_domains": [ 00:21:28.358 { 00:21:28.358 "dma_device_id": "system", 00:21:28.358 "dma_device_type": 1 00:21:28.358 }, 00:21:28.358 { 00:21:28.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.358 "dma_device_type": 2 00:21:28.358 } 00:21:28.358 ], 00:21:28.358 "driver_specific": {} 00:21:28.358 } 00:21:28.358 ] 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.358 06:47:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:28.358 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.655 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.655 "name": "Existed_Raid", 00:21:28.655 "uuid": "90764c2a-59b5-4bbc-9ae2-7d633844a673", 00:21:28.655 "strip_size_kb": 64, 00:21:28.655 "state": "online", 00:21:28.655 "raid_level": "raid5f", 00:21:28.655 "superblock": true, 00:21:28.655 "num_base_bdevs": 4, 00:21:28.655 "num_base_bdevs_discovered": 4, 00:21:28.655 "num_base_bdevs_operational": 4, 00:21:28.655 "base_bdevs_list": [ 00:21:28.655 { 00:21:28.655 "name": "BaseBdev1", 00:21:28.655 "uuid": "b71f2234-3ee3-4537-a992-062894b40c46", 00:21:28.655 "is_configured": true, 00:21:28.655 "data_offset": 2048, 00:21:28.655 "data_size": 63488 00:21:28.655 }, 00:21:28.655 { 00:21:28.655 "name": "BaseBdev2", 00:21:28.655 "uuid": "0604c520-87e4-4c0c-8c4a-610f010e4fc9", 00:21:28.655 "is_configured": true, 00:21:28.655 "data_offset": 2048, 00:21:28.655 "data_size": 63488 00:21:28.655 }, 00:21:28.655 { 00:21:28.655 "name": "BaseBdev3", 00:21:28.655 "uuid": "4a6d2922-1d82-4122-a9e5-41ae5bf9a961", 00:21:28.655 "is_configured": true, 00:21:28.655 "data_offset": 2048, 00:21:28.655 "data_size": 63488 00:21:28.655 }, 00:21:28.655 { 00:21:28.655 "name": "BaseBdev4", 00:21:28.655 "uuid": "9af221a7-0f78-4a75-ab96-80be6659769e", 00:21:28.655 "is_configured": true, 00:21:28.655 "data_offset": 2048, 00:21:28.655 "data_size": 63488 00:21:28.655 } 00:21:28.655 ] 00:21:28.655 }' 00:21:28.655 06:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.655 06:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.914 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:28.914 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:21:28.914 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:28.914 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:28.914 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:28.914 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:28.914 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:28.914 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:28.914 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.914 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.914 [2024-12-06 06:47:47.464776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:28.914 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.914 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:28.914 "name": "Existed_Raid", 00:21:28.914 "aliases": [ 00:21:28.914 "90764c2a-59b5-4bbc-9ae2-7d633844a673" 00:21:28.914 ], 00:21:28.914 "product_name": "Raid Volume", 00:21:28.914 "block_size": 512, 00:21:28.914 "num_blocks": 190464, 00:21:28.914 "uuid": "90764c2a-59b5-4bbc-9ae2-7d633844a673", 00:21:28.914 "assigned_rate_limits": { 00:21:28.914 "rw_ios_per_sec": 0, 00:21:28.914 "rw_mbytes_per_sec": 0, 00:21:28.914 "r_mbytes_per_sec": 0, 00:21:28.914 "w_mbytes_per_sec": 0 00:21:28.914 }, 00:21:28.914 "claimed": false, 00:21:28.914 "zoned": false, 00:21:28.914 "supported_io_types": { 00:21:28.914 "read": true, 00:21:28.914 "write": true, 00:21:28.914 "unmap": false, 00:21:28.914 "flush": false, 
00:21:28.914 "reset": true, 00:21:28.914 "nvme_admin": false, 00:21:28.914 "nvme_io": false, 00:21:28.914 "nvme_io_md": false, 00:21:28.914 "write_zeroes": true, 00:21:28.914 "zcopy": false, 00:21:28.914 "get_zone_info": false, 00:21:28.914 "zone_management": false, 00:21:28.914 "zone_append": false, 00:21:28.914 "compare": false, 00:21:28.914 "compare_and_write": false, 00:21:28.914 "abort": false, 00:21:28.914 "seek_hole": false, 00:21:28.914 "seek_data": false, 00:21:28.914 "copy": false, 00:21:28.914 "nvme_iov_md": false 00:21:28.914 }, 00:21:28.914 "driver_specific": { 00:21:28.914 "raid": { 00:21:28.914 "uuid": "90764c2a-59b5-4bbc-9ae2-7d633844a673", 00:21:28.914 "strip_size_kb": 64, 00:21:28.914 "state": "online", 00:21:28.914 "raid_level": "raid5f", 00:21:28.914 "superblock": true, 00:21:28.914 "num_base_bdevs": 4, 00:21:28.914 "num_base_bdevs_discovered": 4, 00:21:28.914 "num_base_bdevs_operational": 4, 00:21:28.914 "base_bdevs_list": [ 00:21:28.914 { 00:21:28.914 "name": "BaseBdev1", 00:21:28.914 "uuid": "b71f2234-3ee3-4537-a992-062894b40c46", 00:21:28.914 "is_configured": true, 00:21:28.914 "data_offset": 2048, 00:21:28.914 "data_size": 63488 00:21:28.914 }, 00:21:28.914 { 00:21:28.914 "name": "BaseBdev2", 00:21:28.914 "uuid": "0604c520-87e4-4c0c-8c4a-610f010e4fc9", 00:21:28.914 "is_configured": true, 00:21:28.914 "data_offset": 2048, 00:21:28.914 "data_size": 63488 00:21:28.914 }, 00:21:28.914 { 00:21:28.914 "name": "BaseBdev3", 00:21:28.914 "uuid": "4a6d2922-1d82-4122-a9e5-41ae5bf9a961", 00:21:28.914 "is_configured": true, 00:21:28.914 "data_offset": 2048, 00:21:28.914 "data_size": 63488 00:21:28.914 }, 00:21:28.914 { 00:21:28.914 "name": "BaseBdev4", 00:21:28.914 "uuid": "9af221a7-0f78-4a75-ab96-80be6659769e", 00:21:28.914 "is_configured": true, 00:21:28.914 "data_offset": 2048, 00:21:28.914 "data_size": 63488 00:21:28.914 } 00:21:28.914 ] 00:21:28.914 } 00:21:28.914 } 00:21:28.914 }' 00:21:28.914 06:47:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:29.174 BaseBdev2 00:21:29.174 BaseBdev3 00:21:29.174 BaseBdev4' 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.174 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:29.175 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:29.175 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:29.175 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:29.175 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:29.175 06:47:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.175 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.434 [2024-12-06 06:47:47.868695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.434 06:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.434 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.434 "name": "Existed_Raid", 00:21:29.434 "uuid": "90764c2a-59b5-4bbc-9ae2-7d633844a673", 00:21:29.434 "strip_size_kb": 64, 00:21:29.434 "state": "online", 00:21:29.434 "raid_level": "raid5f", 00:21:29.434 "superblock": true, 00:21:29.434 "num_base_bdevs": 4, 00:21:29.434 "num_base_bdevs_discovered": 3, 00:21:29.434 "num_base_bdevs_operational": 3, 00:21:29.434 "base_bdevs_list": [ 00:21:29.434 { 00:21:29.434 "name": 
null, 00:21:29.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.434 "is_configured": false, 00:21:29.434 "data_offset": 0, 00:21:29.434 "data_size": 63488 00:21:29.434 }, 00:21:29.434 { 00:21:29.434 "name": "BaseBdev2", 00:21:29.434 "uuid": "0604c520-87e4-4c0c-8c4a-610f010e4fc9", 00:21:29.434 "is_configured": true, 00:21:29.434 "data_offset": 2048, 00:21:29.434 "data_size": 63488 00:21:29.434 }, 00:21:29.434 { 00:21:29.434 "name": "BaseBdev3", 00:21:29.434 "uuid": "4a6d2922-1d82-4122-a9e5-41ae5bf9a961", 00:21:29.434 "is_configured": true, 00:21:29.434 "data_offset": 2048, 00:21:29.434 "data_size": 63488 00:21:29.434 }, 00:21:29.434 { 00:21:29.434 "name": "BaseBdev4", 00:21:29.434 "uuid": "9af221a7-0f78-4a75-ab96-80be6659769e", 00:21:29.434 "is_configured": true, 00:21:29.434 "data_offset": 2048, 00:21:29.434 "data_size": 63488 00:21:29.434 } 00:21:29.434 ] 00:21:29.434 }' 00:21:29.434 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.434 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.001 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:30.001 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:30.001 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:30.001 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.001 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.001 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.001 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.001 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:21:30.001 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:30.001 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:30.001 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.001 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.001 [2024-12-06 06:47:48.513605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:30.001 [2024-12-06 06:47:48.513977] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:30.001 [2024-12-06 06:47:48.602979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:30.001 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.002 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:30.002 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:30.002 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.002 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:30.002 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.002 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.002 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.261 [2024-12-06 06:47:48.659026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.261 [2024-12-06 
06:47:48.805239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:30.261 [2024-12-06 06:47:48.805429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:30.261 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.521 06:47:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.521 BaseBdev2 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.521 06:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.521 [ 00:21:30.521 { 00:21:30.521 "name": "BaseBdev2", 00:21:30.521 "aliases": [ 00:21:30.521 "5de03b41-da0d-4c1a-a61b-0f25ef5a25b8" 00:21:30.521 ], 00:21:30.521 "product_name": "Malloc disk", 00:21:30.521 "block_size": 512, 00:21:30.521 
"num_blocks": 65536, 00:21:30.521 "uuid": "5de03b41-da0d-4c1a-a61b-0f25ef5a25b8", 00:21:30.521 "assigned_rate_limits": { 00:21:30.521 "rw_ios_per_sec": 0, 00:21:30.521 "rw_mbytes_per_sec": 0, 00:21:30.521 "r_mbytes_per_sec": 0, 00:21:30.521 "w_mbytes_per_sec": 0 00:21:30.521 }, 00:21:30.521 "claimed": false, 00:21:30.521 "zoned": false, 00:21:30.521 "supported_io_types": { 00:21:30.521 "read": true, 00:21:30.521 "write": true, 00:21:30.521 "unmap": true, 00:21:30.521 "flush": true, 00:21:30.521 "reset": true, 00:21:30.521 "nvme_admin": false, 00:21:30.521 "nvme_io": false, 00:21:30.521 "nvme_io_md": false, 00:21:30.521 "write_zeroes": true, 00:21:30.521 "zcopy": true, 00:21:30.521 "get_zone_info": false, 00:21:30.521 "zone_management": false, 00:21:30.521 "zone_append": false, 00:21:30.521 "compare": false, 00:21:30.521 "compare_and_write": false, 00:21:30.521 "abort": true, 00:21:30.521 "seek_hole": false, 00:21:30.521 "seek_data": false, 00:21:30.521 "copy": true, 00:21:30.521 "nvme_iov_md": false 00:21:30.521 }, 00:21:30.521 "memory_domains": [ 00:21:30.521 { 00:21:30.521 "dma_device_id": "system", 00:21:30.521 "dma_device_type": 1 00:21:30.521 }, 00:21:30.521 { 00:21:30.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.521 "dma_device_type": 2 00:21:30.521 } 00:21:30.521 ], 00:21:30.521 "driver_specific": {} 00:21:30.521 } 00:21:30.521 ] 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:30.521 06:47:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.521 BaseBdev3 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.521 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.522 [ 00:21:30.522 { 00:21:30.522 "name": "BaseBdev3", 00:21:30.522 "aliases": [ 00:21:30.522 
"223fc264-3075-4b3b-a271-abd7a168d1f2" 00:21:30.522 ], 00:21:30.522 "product_name": "Malloc disk", 00:21:30.522 "block_size": 512, 00:21:30.522 "num_blocks": 65536, 00:21:30.522 "uuid": "223fc264-3075-4b3b-a271-abd7a168d1f2", 00:21:30.522 "assigned_rate_limits": { 00:21:30.522 "rw_ios_per_sec": 0, 00:21:30.522 "rw_mbytes_per_sec": 0, 00:21:30.522 "r_mbytes_per_sec": 0, 00:21:30.522 "w_mbytes_per_sec": 0 00:21:30.522 }, 00:21:30.522 "claimed": false, 00:21:30.522 "zoned": false, 00:21:30.522 "supported_io_types": { 00:21:30.522 "read": true, 00:21:30.522 "write": true, 00:21:30.522 "unmap": true, 00:21:30.522 "flush": true, 00:21:30.522 "reset": true, 00:21:30.522 "nvme_admin": false, 00:21:30.522 "nvme_io": false, 00:21:30.522 "nvme_io_md": false, 00:21:30.522 "write_zeroes": true, 00:21:30.522 "zcopy": true, 00:21:30.522 "get_zone_info": false, 00:21:30.522 "zone_management": false, 00:21:30.522 "zone_append": false, 00:21:30.522 "compare": false, 00:21:30.522 "compare_and_write": false, 00:21:30.522 "abort": true, 00:21:30.522 "seek_hole": false, 00:21:30.522 "seek_data": false, 00:21:30.522 "copy": true, 00:21:30.522 "nvme_iov_md": false 00:21:30.522 }, 00:21:30.522 "memory_domains": [ 00:21:30.522 { 00:21:30.522 "dma_device_id": "system", 00:21:30.522 "dma_device_type": 1 00:21:30.522 }, 00:21:30.522 { 00:21:30.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.522 "dma_device_type": 2 00:21:30.522 } 00:21:30.522 ], 00:21:30.522 "driver_specific": {} 00:21:30.522 } 00:21:30.522 ] 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:30.522 06:47:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.522 BaseBdev4 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:21:30.522 [ 00:21:30.522 { 00:21:30.522 "name": "BaseBdev4", 00:21:30.522 "aliases": [ 00:21:30.522 "9789368c-3974-450d-8594-8da95d18ebfc" 00:21:30.522 ], 00:21:30.522 "product_name": "Malloc disk", 00:21:30.522 "block_size": 512, 00:21:30.522 "num_blocks": 65536, 00:21:30.522 "uuid": "9789368c-3974-450d-8594-8da95d18ebfc", 00:21:30.522 "assigned_rate_limits": { 00:21:30.522 "rw_ios_per_sec": 0, 00:21:30.522 "rw_mbytes_per_sec": 0, 00:21:30.522 "r_mbytes_per_sec": 0, 00:21:30.522 "w_mbytes_per_sec": 0 00:21:30.522 }, 00:21:30.522 "claimed": false, 00:21:30.522 "zoned": false, 00:21:30.522 "supported_io_types": { 00:21:30.522 "read": true, 00:21:30.522 "write": true, 00:21:30.522 "unmap": true, 00:21:30.522 "flush": true, 00:21:30.522 "reset": true, 00:21:30.522 "nvme_admin": false, 00:21:30.522 "nvme_io": false, 00:21:30.522 "nvme_io_md": false, 00:21:30.522 "write_zeroes": true, 00:21:30.522 "zcopy": true, 00:21:30.522 "get_zone_info": false, 00:21:30.522 "zone_management": false, 00:21:30.522 "zone_append": false, 00:21:30.522 "compare": false, 00:21:30.522 "compare_and_write": false, 00:21:30.522 "abort": true, 00:21:30.522 "seek_hole": false, 00:21:30.522 "seek_data": false, 00:21:30.522 "copy": true, 00:21:30.522 "nvme_iov_md": false 00:21:30.522 }, 00:21:30.522 "memory_domains": [ 00:21:30.522 { 00:21:30.522 "dma_device_id": "system", 00:21:30.522 "dma_device_type": 1 00:21:30.522 }, 00:21:30.522 { 00:21:30.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.522 "dma_device_type": 2 00:21:30.522 } 00:21:30.522 ], 00:21:30.522 "driver_specific": {} 00:21:30.522 } 00:21:30.522 ] 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:30.522 06:47:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.522 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.782 [2024-12-06 06:47:49.167897] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:30.782 [2024-12-06 06:47:49.168639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:30.782 [2024-12-06 06:47:49.168803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:30.782 [2024-12-06 06:47:49.171341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:30.782 [2024-12-06 06:47:49.171560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.782 "name": "Existed_Raid", 00:21:30.782 "uuid": "9b6102a0-2b0f-4bce-bba0-bfb0f7bb1c0b", 00:21:30.782 "strip_size_kb": 64, 00:21:30.782 "state": "configuring", 00:21:30.782 "raid_level": "raid5f", 00:21:30.782 "superblock": true, 00:21:30.782 "num_base_bdevs": 4, 00:21:30.782 "num_base_bdevs_discovered": 3, 00:21:30.782 "num_base_bdevs_operational": 4, 00:21:30.782 "base_bdevs_list": [ 00:21:30.782 { 00:21:30.782 "name": "BaseBdev1", 00:21:30.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.782 "is_configured": false, 00:21:30.782 "data_offset": 0, 00:21:30.782 "data_size": 0 00:21:30.782 }, 00:21:30.782 { 00:21:30.782 "name": "BaseBdev2", 00:21:30.782 "uuid": "5de03b41-da0d-4c1a-a61b-0f25ef5a25b8", 00:21:30.782 "is_configured": true, 00:21:30.782 "data_offset": 2048, 00:21:30.782 
"data_size": 63488 00:21:30.782 }, 00:21:30.782 { 00:21:30.782 "name": "BaseBdev3", 00:21:30.782 "uuid": "223fc264-3075-4b3b-a271-abd7a168d1f2", 00:21:30.782 "is_configured": true, 00:21:30.782 "data_offset": 2048, 00:21:30.782 "data_size": 63488 00:21:30.782 }, 00:21:30.782 { 00:21:30.782 "name": "BaseBdev4", 00:21:30.782 "uuid": "9789368c-3974-450d-8594-8da95d18ebfc", 00:21:30.782 "is_configured": true, 00:21:30.782 "data_offset": 2048, 00:21:30.782 "data_size": 63488 00:21:30.782 } 00:21:30.782 ] 00:21:30.782 }' 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.782 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.042 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:31.042 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.042 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.302 [2024-12-06 06:47:49.692128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:31.302 06:47:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:31.302 "name": "Existed_Raid", 00:21:31.302 "uuid": "9b6102a0-2b0f-4bce-bba0-bfb0f7bb1c0b", 00:21:31.302 "strip_size_kb": 64, 00:21:31.302 "state": "configuring", 00:21:31.302 "raid_level": "raid5f", 00:21:31.302 "superblock": true, 00:21:31.302 "num_base_bdevs": 4, 00:21:31.302 "num_base_bdevs_discovered": 2, 00:21:31.302 "num_base_bdevs_operational": 4, 00:21:31.302 "base_bdevs_list": [ 00:21:31.302 { 00:21:31.302 "name": "BaseBdev1", 00:21:31.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.302 "is_configured": false, 00:21:31.302 "data_offset": 0, 00:21:31.302 "data_size": 0 00:21:31.302 }, 00:21:31.302 { 00:21:31.302 "name": null, 00:21:31.302 "uuid": "5de03b41-da0d-4c1a-a61b-0f25ef5a25b8", 00:21:31.302 
"is_configured": false, 00:21:31.302 "data_offset": 0, 00:21:31.302 "data_size": 63488 00:21:31.302 }, 00:21:31.302 { 00:21:31.302 "name": "BaseBdev3", 00:21:31.302 "uuid": "223fc264-3075-4b3b-a271-abd7a168d1f2", 00:21:31.302 "is_configured": true, 00:21:31.302 "data_offset": 2048, 00:21:31.302 "data_size": 63488 00:21:31.302 }, 00:21:31.302 { 00:21:31.302 "name": "BaseBdev4", 00:21:31.302 "uuid": "9789368c-3974-450d-8594-8da95d18ebfc", 00:21:31.302 "is_configured": true, 00:21:31.302 "data_offset": 2048, 00:21:31.302 "data_size": 63488 00:21:31.302 } 00:21:31.302 ] 00:21:31.302 }' 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:31.302 06:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.561 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.561 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:31.561 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.561 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.561 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.826 [2024-12-06 06:47:50.266337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:21:31.826 BaseBdev1 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.826 [ 00:21:31.826 { 00:21:31.826 "name": "BaseBdev1", 00:21:31.826 "aliases": [ 00:21:31.826 "0eed4498-bcde-4f55-b7c1-055dbbfc9aca" 00:21:31.826 ], 00:21:31.826 "product_name": "Malloc disk", 00:21:31.826 "block_size": 512, 00:21:31.826 "num_blocks": 65536, 00:21:31.826 "uuid": "0eed4498-bcde-4f55-b7c1-055dbbfc9aca", 
00:21:31.826 "assigned_rate_limits": { 00:21:31.826 "rw_ios_per_sec": 0, 00:21:31.826 "rw_mbytes_per_sec": 0, 00:21:31.826 "r_mbytes_per_sec": 0, 00:21:31.826 "w_mbytes_per_sec": 0 00:21:31.826 }, 00:21:31.826 "claimed": true, 00:21:31.826 "claim_type": "exclusive_write", 00:21:31.826 "zoned": false, 00:21:31.826 "supported_io_types": { 00:21:31.826 "read": true, 00:21:31.826 "write": true, 00:21:31.826 "unmap": true, 00:21:31.826 "flush": true, 00:21:31.826 "reset": true, 00:21:31.826 "nvme_admin": false, 00:21:31.826 "nvme_io": false, 00:21:31.826 "nvme_io_md": false, 00:21:31.826 "write_zeroes": true, 00:21:31.826 "zcopy": true, 00:21:31.826 "get_zone_info": false, 00:21:31.826 "zone_management": false, 00:21:31.826 "zone_append": false, 00:21:31.826 "compare": false, 00:21:31.826 "compare_and_write": false, 00:21:31.826 "abort": true, 00:21:31.826 "seek_hole": false, 00:21:31.826 "seek_data": false, 00:21:31.826 "copy": true, 00:21:31.826 "nvme_iov_md": false 00:21:31.826 }, 00:21:31.826 "memory_domains": [ 00:21:31.826 { 00:21:31.826 "dma_device_id": "system", 00:21:31.826 "dma_device_type": 1 00:21:31.826 }, 00:21:31.826 { 00:21:31.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.826 "dma_device_type": 2 00:21:31.826 } 00:21:31.826 ], 00:21:31.826 "driver_specific": {} 00:21:31.826 } 00:21:31.826 ] 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:31.826 06:47:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:31.826 "name": "Existed_Raid", 00:21:31.826 "uuid": "9b6102a0-2b0f-4bce-bba0-bfb0f7bb1c0b", 00:21:31.826 "strip_size_kb": 64, 00:21:31.826 "state": "configuring", 00:21:31.826 "raid_level": "raid5f", 00:21:31.826 "superblock": true, 00:21:31.826 "num_base_bdevs": 4, 00:21:31.826 "num_base_bdevs_discovered": 3, 00:21:31.826 "num_base_bdevs_operational": 4, 00:21:31.826 "base_bdevs_list": [ 00:21:31.826 { 00:21:31.826 "name": "BaseBdev1", 00:21:31.826 "uuid": "0eed4498-bcde-4f55-b7c1-055dbbfc9aca", 
00:21:31.826 "is_configured": true, 00:21:31.826 "data_offset": 2048, 00:21:31.826 "data_size": 63488 00:21:31.826 }, 00:21:31.826 { 00:21:31.826 "name": null, 00:21:31.826 "uuid": "5de03b41-da0d-4c1a-a61b-0f25ef5a25b8", 00:21:31.826 "is_configured": false, 00:21:31.826 "data_offset": 0, 00:21:31.826 "data_size": 63488 00:21:31.826 }, 00:21:31.826 { 00:21:31.826 "name": "BaseBdev3", 00:21:31.826 "uuid": "223fc264-3075-4b3b-a271-abd7a168d1f2", 00:21:31.826 "is_configured": true, 00:21:31.826 "data_offset": 2048, 00:21:31.826 "data_size": 63488 00:21:31.826 }, 00:21:31.826 { 00:21:31.826 "name": "BaseBdev4", 00:21:31.826 "uuid": "9789368c-3974-450d-8594-8da95d18ebfc", 00:21:31.826 "is_configured": true, 00:21:31.826 "data_offset": 2048, 00:21:31.826 "data_size": 63488 00:21:31.826 } 00:21:31.826 ] 00:21:31.826 }' 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:31.826 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.393 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.393 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.393 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.393 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:32.393 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.393 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:32.393 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:32.393 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:32.393 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.393 [2024-12-06 06:47:50.894634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:32.393 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.393 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:32.393 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:32.393 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:32.393 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:32.394 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:32.394 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:32.394 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.394 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.394 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.394 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.394 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.394 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.394 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.394 06:47:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:32.394 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.394 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.394 "name": "Existed_Raid", 00:21:32.394 "uuid": "9b6102a0-2b0f-4bce-bba0-bfb0f7bb1c0b", 00:21:32.394 "strip_size_kb": 64, 00:21:32.394 "state": "configuring", 00:21:32.394 "raid_level": "raid5f", 00:21:32.394 "superblock": true, 00:21:32.394 "num_base_bdevs": 4, 00:21:32.394 "num_base_bdevs_discovered": 2, 00:21:32.394 "num_base_bdevs_operational": 4, 00:21:32.394 "base_bdevs_list": [ 00:21:32.394 { 00:21:32.394 "name": "BaseBdev1", 00:21:32.394 "uuid": "0eed4498-bcde-4f55-b7c1-055dbbfc9aca", 00:21:32.394 "is_configured": true, 00:21:32.394 "data_offset": 2048, 00:21:32.394 "data_size": 63488 00:21:32.394 }, 00:21:32.394 { 00:21:32.394 "name": null, 00:21:32.394 "uuid": "5de03b41-da0d-4c1a-a61b-0f25ef5a25b8", 00:21:32.394 "is_configured": false, 00:21:32.394 "data_offset": 0, 00:21:32.394 "data_size": 63488 00:21:32.394 }, 00:21:32.394 { 00:21:32.394 "name": null, 00:21:32.394 "uuid": "223fc264-3075-4b3b-a271-abd7a168d1f2", 00:21:32.394 "is_configured": false, 00:21:32.394 "data_offset": 0, 00:21:32.394 "data_size": 63488 00:21:32.394 }, 00:21:32.394 { 00:21:32.394 "name": "BaseBdev4", 00:21:32.394 "uuid": "9789368c-3974-450d-8594-8da95d18ebfc", 00:21:32.394 "is_configured": true, 00:21:32.394 "data_offset": 2048, 00:21:32.394 "data_size": 63488 00:21:32.394 } 00:21:32.394 ] 00:21:32.394 }' 00:21:32.394 06:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.394 06:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.961 [2024-12-06 06:47:51.482754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.961 "name": "Existed_Raid", 00:21:32.961 "uuid": "9b6102a0-2b0f-4bce-bba0-bfb0f7bb1c0b", 00:21:32.961 "strip_size_kb": 64, 00:21:32.961 "state": "configuring", 00:21:32.961 "raid_level": "raid5f", 00:21:32.961 "superblock": true, 00:21:32.961 "num_base_bdevs": 4, 00:21:32.961 "num_base_bdevs_discovered": 3, 00:21:32.961 "num_base_bdevs_operational": 4, 00:21:32.961 "base_bdevs_list": [ 00:21:32.961 { 00:21:32.961 "name": "BaseBdev1", 00:21:32.961 "uuid": "0eed4498-bcde-4f55-b7c1-055dbbfc9aca", 00:21:32.961 "is_configured": true, 00:21:32.961 "data_offset": 2048, 00:21:32.961 "data_size": 63488 00:21:32.961 }, 00:21:32.961 { 00:21:32.961 "name": null, 00:21:32.961 "uuid": "5de03b41-da0d-4c1a-a61b-0f25ef5a25b8", 00:21:32.961 "is_configured": false, 00:21:32.961 "data_offset": 0, 00:21:32.961 "data_size": 63488 00:21:32.961 }, 00:21:32.961 { 00:21:32.961 "name": "BaseBdev3", 00:21:32.961 "uuid": "223fc264-3075-4b3b-a271-abd7a168d1f2", 
00:21:32.961 "is_configured": true, 00:21:32.961 "data_offset": 2048, 00:21:32.961 "data_size": 63488 00:21:32.961 }, 00:21:32.961 { 00:21:32.961 "name": "BaseBdev4", 00:21:32.961 "uuid": "9789368c-3974-450d-8594-8da95d18ebfc", 00:21:32.961 "is_configured": true, 00:21:32.961 "data_offset": 2048, 00:21:32.961 "data_size": 63488 00:21:32.961 } 00:21:32.961 ] 00:21:32.961 }' 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.961 06:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.530 [2024-12-06 06:47:52.070984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.530 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.789 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.789 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.789 "name": "Existed_Raid", 00:21:33.789 "uuid": "9b6102a0-2b0f-4bce-bba0-bfb0f7bb1c0b", 00:21:33.789 "strip_size_kb": 64, 00:21:33.789 "state": "configuring", 00:21:33.789 "raid_level": "raid5f", 
00:21:33.789 "superblock": true, 00:21:33.789 "num_base_bdevs": 4, 00:21:33.789 "num_base_bdevs_discovered": 2, 00:21:33.789 "num_base_bdevs_operational": 4, 00:21:33.789 "base_bdevs_list": [ 00:21:33.789 { 00:21:33.789 "name": null, 00:21:33.789 "uuid": "0eed4498-bcde-4f55-b7c1-055dbbfc9aca", 00:21:33.789 "is_configured": false, 00:21:33.789 "data_offset": 0, 00:21:33.789 "data_size": 63488 00:21:33.789 }, 00:21:33.789 { 00:21:33.789 "name": null, 00:21:33.789 "uuid": "5de03b41-da0d-4c1a-a61b-0f25ef5a25b8", 00:21:33.789 "is_configured": false, 00:21:33.789 "data_offset": 0, 00:21:33.789 "data_size": 63488 00:21:33.789 }, 00:21:33.789 { 00:21:33.789 "name": "BaseBdev3", 00:21:33.789 "uuid": "223fc264-3075-4b3b-a271-abd7a168d1f2", 00:21:33.789 "is_configured": true, 00:21:33.789 "data_offset": 2048, 00:21:33.789 "data_size": 63488 00:21:33.789 }, 00:21:33.789 { 00:21:33.789 "name": "BaseBdev4", 00:21:33.789 "uuid": "9789368c-3974-450d-8594-8da95d18ebfc", 00:21:33.789 "is_configured": true, 00:21:33.789 "data_offset": 2048, 00:21:33.789 "data_size": 63488 00:21:33.789 } 00:21:33.789 ] 00:21:33.789 }' 00:21:33.789 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.789 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.048 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.048 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:34.048 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.048 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.048 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.307 [2024-12-06 06:47:52.730431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.307 06:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.307 "name": "Existed_Raid", 00:21:34.307 "uuid": "9b6102a0-2b0f-4bce-bba0-bfb0f7bb1c0b", 00:21:34.307 "strip_size_kb": 64, 00:21:34.307 "state": "configuring", 00:21:34.307 "raid_level": "raid5f", 00:21:34.307 "superblock": true, 00:21:34.307 "num_base_bdevs": 4, 00:21:34.307 "num_base_bdevs_discovered": 3, 00:21:34.307 "num_base_bdevs_operational": 4, 00:21:34.307 "base_bdevs_list": [ 00:21:34.307 { 00:21:34.307 "name": null, 00:21:34.307 "uuid": "0eed4498-bcde-4f55-b7c1-055dbbfc9aca", 00:21:34.307 "is_configured": false, 00:21:34.307 "data_offset": 0, 00:21:34.307 "data_size": 63488 00:21:34.307 }, 00:21:34.307 { 00:21:34.307 "name": "BaseBdev2", 00:21:34.307 "uuid": "5de03b41-da0d-4c1a-a61b-0f25ef5a25b8", 00:21:34.307 "is_configured": true, 00:21:34.307 "data_offset": 2048, 00:21:34.307 "data_size": 63488 00:21:34.307 }, 00:21:34.307 { 00:21:34.307 "name": "BaseBdev3", 00:21:34.307 "uuid": "223fc264-3075-4b3b-a271-abd7a168d1f2", 00:21:34.307 "is_configured": true, 00:21:34.307 "data_offset": 2048, 00:21:34.307 "data_size": 63488 00:21:34.307 }, 00:21:34.307 { 00:21:34.307 "name": "BaseBdev4", 00:21:34.307 "uuid": "9789368c-3974-450d-8594-8da95d18ebfc", 00:21:34.307 "is_configured": true, 00:21:34.308 "data_offset": 2048, 00:21:34.308 "data_size": 63488 00:21:34.308 } 00:21:34.308 ] 00:21:34.308 }' 00:21:34.308 06:47:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.308 06:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0eed4498-bcde-4f55-b7c1-055dbbfc9aca 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.876 [2024-12-06 06:47:53.361390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:34.876 NewBaseBdev 
00:21:34.876 [2024-12-06 06:47:53.361934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:34.876 [2024-12-06 06:47:53.361959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:34.876 [2024-12-06 06:47:53.362277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.876 [2024-12-06 06:47:53.368850] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:34.876 [2024-12-06 06:47:53.369006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:34.876 [2024-12-06 06:47:53.369333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.876 06:47:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.876 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.876 [ 00:21:34.876 { 00:21:34.876 "name": "NewBaseBdev", 00:21:34.876 "aliases": [ 00:21:34.876 "0eed4498-bcde-4f55-b7c1-055dbbfc9aca" 00:21:34.876 ], 00:21:34.876 "product_name": "Malloc disk", 00:21:34.876 "block_size": 512, 00:21:34.876 "num_blocks": 65536, 00:21:34.876 "uuid": "0eed4498-bcde-4f55-b7c1-055dbbfc9aca", 00:21:34.876 "assigned_rate_limits": { 00:21:34.876 "rw_ios_per_sec": 0, 00:21:34.876 "rw_mbytes_per_sec": 0, 00:21:34.876 "r_mbytes_per_sec": 0, 00:21:34.876 "w_mbytes_per_sec": 0 00:21:34.876 }, 00:21:34.876 "claimed": true, 00:21:34.876 "claim_type": "exclusive_write", 00:21:34.876 "zoned": false, 00:21:34.876 "supported_io_types": { 00:21:34.876 "read": true, 00:21:34.876 "write": true, 00:21:34.876 "unmap": true, 00:21:34.876 "flush": true, 00:21:34.876 "reset": true, 00:21:34.876 "nvme_admin": false, 00:21:34.876 "nvme_io": false, 00:21:34.876 "nvme_io_md": false, 00:21:34.876 "write_zeroes": true, 00:21:34.876 "zcopy": true, 00:21:34.876 "get_zone_info": false, 00:21:34.876 "zone_management": false, 00:21:34.876 "zone_append": false, 00:21:34.877 "compare": false, 00:21:34.877 "compare_and_write": false, 00:21:34.877 "abort": true, 00:21:34.877 "seek_hole": false, 00:21:34.877 "seek_data": false, 00:21:34.877 "copy": true, 00:21:34.877 "nvme_iov_md": false 00:21:34.877 }, 00:21:34.877 "memory_domains": [ 00:21:34.877 { 00:21:34.877 "dma_device_id": "system", 00:21:34.877 "dma_device_type": 1 00:21:34.877 }, 00:21:34.877 { 00:21:34.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:21:34.877 "dma_device_type": 2 00:21:34.877 } 00:21:34.877 ], 00:21:34.877 "driver_specific": {} 00:21:34.877 } 00:21:34.877 ] 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.877 "name": "Existed_Raid", 00:21:34.877 "uuid": "9b6102a0-2b0f-4bce-bba0-bfb0f7bb1c0b", 00:21:34.877 "strip_size_kb": 64, 00:21:34.877 "state": "online", 00:21:34.877 "raid_level": "raid5f", 00:21:34.877 "superblock": true, 00:21:34.877 "num_base_bdevs": 4, 00:21:34.877 "num_base_bdevs_discovered": 4, 00:21:34.877 "num_base_bdevs_operational": 4, 00:21:34.877 "base_bdevs_list": [ 00:21:34.877 { 00:21:34.877 "name": "NewBaseBdev", 00:21:34.877 "uuid": "0eed4498-bcde-4f55-b7c1-055dbbfc9aca", 00:21:34.877 "is_configured": true, 00:21:34.877 "data_offset": 2048, 00:21:34.877 "data_size": 63488 00:21:34.877 }, 00:21:34.877 { 00:21:34.877 "name": "BaseBdev2", 00:21:34.877 "uuid": "5de03b41-da0d-4c1a-a61b-0f25ef5a25b8", 00:21:34.877 "is_configured": true, 00:21:34.877 "data_offset": 2048, 00:21:34.877 "data_size": 63488 00:21:34.877 }, 00:21:34.877 { 00:21:34.877 "name": "BaseBdev3", 00:21:34.877 "uuid": "223fc264-3075-4b3b-a271-abd7a168d1f2", 00:21:34.877 "is_configured": true, 00:21:34.877 "data_offset": 2048, 00:21:34.877 "data_size": 63488 00:21:34.877 }, 00:21:34.877 { 00:21:34.877 "name": "BaseBdev4", 00:21:34.877 "uuid": "9789368c-3974-450d-8594-8da95d18ebfc", 00:21:34.877 "is_configured": true, 00:21:34.877 "data_offset": 2048, 00:21:34.877 "data_size": 63488 00:21:34.877 } 00:21:34.877 ] 00:21:34.877 }' 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.877 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.446 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:35.446 06:47:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:35.446 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:35.446 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:35.446 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:35.446 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:35.446 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:35.446 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.446 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.446 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:35.446 [2024-12-06 06:47:53.925423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.446 06:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.446 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:35.446 "name": "Existed_Raid", 00:21:35.446 "aliases": [ 00:21:35.446 "9b6102a0-2b0f-4bce-bba0-bfb0f7bb1c0b" 00:21:35.446 ], 00:21:35.446 "product_name": "Raid Volume", 00:21:35.446 "block_size": 512, 00:21:35.446 "num_blocks": 190464, 00:21:35.446 "uuid": "9b6102a0-2b0f-4bce-bba0-bfb0f7bb1c0b", 00:21:35.446 "assigned_rate_limits": { 00:21:35.446 "rw_ios_per_sec": 0, 00:21:35.446 "rw_mbytes_per_sec": 0, 00:21:35.446 "r_mbytes_per_sec": 0, 00:21:35.446 "w_mbytes_per_sec": 0 00:21:35.446 }, 00:21:35.446 "claimed": false, 00:21:35.446 "zoned": false, 00:21:35.446 "supported_io_types": { 00:21:35.446 "read": true, 00:21:35.446 
"write": true, 00:21:35.446 "unmap": false, 00:21:35.446 "flush": false, 00:21:35.446 "reset": true, 00:21:35.446 "nvme_admin": false, 00:21:35.446 "nvme_io": false, 00:21:35.446 "nvme_io_md": false, 00:21:35.446 "write_zeroes": true, 00:21:35.446 "zcopy": false, 00:21:35.446 "get_zone_info": false, 00:21:35.446 "zone_management": false, 00:21:35.446 "zone_append": false, 00:21:35.446 "compare": false, 00:21:35.446 "compare_and_write": false, 00:21:35.446 "abort": false, 00:21:35.446 "seek_hole": false, 00:21:35.446 "seek_data": false, 00:21:35.446 "copy": false, 00:21:35.446 "nvme_iov_md": false 00:21:35.446 }, 00:21:35.446 "driver_specific": { 00:21:35.446 "raid": { 00:21:35.447 "uuid": "9b6102a0-2b0f-4bce-bba0-bfb0f7bb1c0b", 00:21:35.447 "strip_size_kb": 64, 00:21:35.447 "state": "online", 00:21:35.447 "raid_level": "raid5f", 00:21:35.447 "superblock": true, 00:21:35.447 "num_base_bdevs": 4, 00:21:35.447 "num_base_bdevs_discovered": 4, 00:21:35.447 "num_base_bdevs_operational": 4, 00:21:35.447 "base_bdevs_list": [ 00:21:35.447 { 00:21:35.447 "name": "NewBaseBdev", 00:21:35.447 "uuid": "0eed4498-bcde-4f55-b7c1-055dbbfc9aca", 00:21:35.447 "is_configured": true, 00:21:35.447 "data_offset": 2048, 00:21:35.447 "data_size": 63488 00:21:35.447 }, 00:21:35.447 { 00:21:35.447 "name": "BaseBdev2", 00:21:35.447 "uuid": "5de03b41-da0d-4c1a-a61b-0f25ef5a25b8", 00:21:35.447 "is_configured": true, 00:21:35.447 "data_offset": 2048, 00:21:35.447 "data_size": 63488 00:21:35.447 }, 00:21:35.447 { 00:21:35.447 "name": "BaseBdev3", 00:21:35.447 "uuid": "223fc264-3075-4b3b-a271-abd7a168d1f2", 00:21:35.447 "is_configured": true, 00:21:35.447 "data_offset": 2048, 00:21:35.447 "data_size": 63488 00:21:35.447 }, 00:21:35.447 { 00:21:35.447 "name": "BaseBdev4", 00:21:35.447 "uuid": "9789368c-3974-450d-8594-8da95d18ebfc", 00:21:35.447 "is_configured": true, 00:21:35.447 "data_offset": 2048, 00:21:35.447 "data_size": 63488 00:21:35.447 } 00:21:35.447 ] 00:21:35.447 } 00:21:35.447 } 
00:21:35.447 }' 00:21:35.447 06:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:35.447 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:35.447 BaseBdev2 00:21:35.447 BaseBdev3 00:21:35.447 BaseBdev4' 00:21:35.447 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.447 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:35.447 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.447 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.447 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:35.447 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.447 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.706 
06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.706 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.706 [2024-12-06 06:47:54.301218] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:35.707 [2024-12-06 06:47:54.301385] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:35.707 [2024-12-06 06:47:54.301613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:35.707 [2024-12-06 06:47:54.302117] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:35.707 [2024-12-06 06:47:54.302253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:35.707 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.707 06:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84022 00:21:35.707 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84022 ']' 00:21:35.707 06:47:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 84022 00:21:35.707 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:35.707 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.707 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84022 00:21:35.707 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:35.707 killing process with pid 84022 00:21:35.707 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:35.707 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84022' 00:21:35.707 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84022 00:21:35.707 [2024-12-06 06:47:54.340058] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:35.707 06:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84022 00:21:36.273 [2024-12-06 06:47:54.695713] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:37.209 06:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:37.209 00:21:37.209 real 0m13.061s 00:21:37.209 user 0m21.677s 00:21:37.209 sys 0m1.793s 00:21:37.209 06:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:37.209 06:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.209 ************************************ 00:21:37.209 END TEST raid5f_state_function_test_sb 00:21:37.209 ************************************ 00:21:37.209 06:47:55 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 4 00:21:37.209 06:47:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:37.209 06:47:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.209 06:47:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:37.209 ************************************ 00:21:37.209 START TEST raid5f_superblock_test 00:21:37.209 ************************************ 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84704 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84704 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84704 ']' 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.209 06:47:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.466 [2024-12-06 06:47:55.913620] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:21:37.466 [2024-12-06 06:47:55.913784] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84704 ] 00:21:37.466 [2024-12-06 06:47:56.097356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.723 [2024-12-06 06:47:56.264612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.981 [2024-12-06 06:47:56.469030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:37.981 [2024-12-06 06:47:56.469085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.547 malloc1 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.547 [2024-12-06 06:47:56.970347] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:38.547 [2024-12-06 06:47:56.970418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.547 [2024-12-06 06:47:56.970450] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:38.547 [2024-12-06 06:47:56.970465] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.547 [2024-12-06 06:47:56.973299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.547 [2024-12-06 06:47:56.973342] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:38.547 pt1 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.547 06:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.547 malloc2 00:21:38.547 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.547 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:38.547 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.547 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.547 [2024-12-06 06:47:57.026551] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:38.547 [2024-12-06 06:47:57.026614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.547 [2024-12-06 06:47:57.026650] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:38.547 [2024-12-06 06:47:57.026666] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.548 [2024-12-06 06:47:57.029452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.548 [2024-12-06 06:47:57.029493] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:38.548 pt2 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.548 malloc3 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.548 [2024-12-06 06:47:57.095053] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:38.548 [2024-12-06 06:47:57.095116] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.548 [2024-12-06 06:47:57.095147] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:38.548 [2024-12-06 06:47:57.095173] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.548 [2024-12-06 06:47:57.098059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.548 [2024-12-06 06:47:57.098101] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:38.548 pt3 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.548 06:47:57 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.548 malloc4 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.548 [2024-12-06 06:47:57.151785] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:38.548 [2024-12-06 06:47:57.151858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.548 [2024-12-06 06:47:57.151891] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:38.548 [2024-12-06 06:47:57.151907] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.548 [2024-12-06 06:47:57.154826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.548 [2024-12-06 06:47:57.154867] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:38.548 pt4 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:38.548 [2024-12-06 06:47:57.163832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:38.548 [2024-12-06 06:47:57.166302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:38.548 [2024-12-06 06:47:57.166429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:38.548 [2024-12-06 06:47:57.166502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:38.548 [2024-12-06 06:47:57.166808] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:38.548 [2024-12-06 06:47:57.166840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:38.548 [2024-12-06 06:47:57.167192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:38.548 [2024-12-06 06:47:57.173965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:38.548 [2024-12-06 06:47:57.174001] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:38.548 [2024-12-06 06:47:57.174279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:38.548 
06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.548 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.915 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.915 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.915 "name": "raid_bdev1", 00:21:38.915 "uuid": "7146d6fb-4a43-4721-a236-eb27144bec52", 00:21:38.915 "strip_size_kb": 64, 00:21:38.915 "state": "online", 00:21:38.915 "raid_level": "raid5f", 00:21:38.915 "superblock": true, 00:21:38.915 "num_base_bdevs": 4, 00:21:38.915 "num_base_bdevs_discovered": 4, 00:21:38.915 "num_base_bdevs_operational": 4, 00:21:38.915 "base_bdevs_list": [ 00:21:38.915 { 00:21:38.915 "name": "pt1", 00:21:38.915 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:38.915 "is_configured": true, 00:21:38.915 "data_offset": 2048, 00:21:38.915 "data_size": 63488 00:21:38.915 }, 00:21:38.915 { 00:21:38.915 "name": "pt2", 00:21:38.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:38.915 "is_configured": true, 00:21:38.915 "data_offset": 2048, 00:21:38.915 
"data_size": 63488 00:21:38.916 }, 00:21:38.916 { 00:21:38.916 "name": "pt3", 00:21:38.916 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:38.916 "is_configured": true, 00:21:38.916 "data_offset": 2048, 00:21:38.916 "data_size": 63488 00:21:38.916 }, 00:21:38.916 { 00:21:38.916 "name": "pt4", 00:21:38.916 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:38.916 "is_configured": true, 00:21:38.916 "data_offset": 2048, 00:21:38.916 "data_size": 63488 00:21:38.916 } 00:21:38.916 ] 00:21:38.916 }' 00:21:38.916 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.916 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.175 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:39.175 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:39.175 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:39.175 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:39.175 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:39.175 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:39.175 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:39.175 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.175 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:39.175 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.175 [2024-12-06 06:47:57.666109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.175 06:47:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.175 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:39.175 "name": "raid_bdev1", 00:21:39.175 "aliases": [ 00:21:39.175 "7146d6fb-4a43-4721-a236-eb27144bec52" 00:21:39.175 ], 00:21:39.175 "product_name": "Raid Volume", 00:21:39.175 "block_size": 512, 00:21:39.175 "num_blocks": 190464, 00:21:39.175 "uuid": "7146d6fb-4a43-4721-a236-eb27144bec52", 00:21:39.175 "assigned_rate_limits": { 00:21:39.175 "rw_ios_per_sec": 0, 00:21:39.175 "rw_mbytes_per_sec": 0, 00:21:39.175 "r_mbytes_per_sec": 0, 00:21:39.175 "w_mbytes_per_sec": 0 00:21:39.175 }, 00:21:39.175 "claimed": false, 00:21:39.175 "zoned": false, 00:21:39.175 "supported_io_types": { 00:21:39.175 "read": true, 00:21:39.175 "write": true, 00:21:39.175 "unmap": false, 00:21:39.175 "flush": false, 00:21:39.175 "reset": true, 00:21:39.175 "nvme_admin": false, 00:21:39.175 "nvme_io": false, 00:21:39.175 "nvme_io_md": false, 00:21:39.175 "write_zeroes": true, 00:21:39.175 "zcopy": false, 00:21:39.175 "get_zone_info": false, 00:21:39.175 "zone_management": false, 00:21:39.175 "zone_append": false, 00:21:39.175 "compare": false, 00:21:39.175 "compare_and_write": false, 00:21:39.175 "abort": false, 00:21:39.175 "seek_hole": false, 00:21:39.175 "seek_data": false, 00:21:39.175 "copy": false, 00:21:39.175 "nvme_iov_md": false 00:21:39.175 }, 00:21:39.175 "driver_specific": { 00:21:39.175 "raid": { 00:21:39.175 "uuid": "7146d6fb-4a43-4721-a236-eb27144bec52", 00:21:39.175 "strip_size_kb": 64, 00:21:39.175 "state": "online", 00:21:39.175 "raid_level": "raid5f", 00:21:39.175 "superblock": true, 00:21:39.175 "num_base_bdevs": 4, 00:21:39.175 "num_base_bdevs_discovered": 4, 00:21:39.175 "num_base_bdevs_operational": 4, 00:21:39.176 "base_bdevs_list": [ 00:21:39.176 { 00:21:39.176 "name": "pt1", 00:21:39.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:39.176 "is_configured": true, 00:21:39.176 "data_offset": 2048, 
00:21:39.176 "data_size": 63488 00:21:39.176 }, 00:21:39.176 { 00:21:39.176 "name": "pt2", 00:21:39.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:39.176 "is_configured": true, 00:21:39.176 "data_offset": 2048, 00:21:39.176 "data_size": 63488 00:21:39.176 }, 00:21:39.176 { 00:21:39.176 "name": "pt3", 00:21:39.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:39.176 "is_configured": true, 00:21:39.176 "data_offset": 2048, 00:21:39.176 "data_size": 63488 00:21:39.176 }, 00:21:39.176 { 00:21:39.176 "name": "pt4", 00:21:39.176 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:39.176 "is_configured": true, 00:21:39.176 "data_offset": 2048, 00:21:39.176 "data_size": 63488 00:21:39.176 } 00:21:39.176 ] 00:21:39.176 } 00:21:39.176 } 00:21:39.176 }' 00:21:39.176 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:39.176 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:39.176 pt2 00:21:39.176 pt3 00:21:39.176 pt4' 00:21:39.176 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.176 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:39.176 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:39.176 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:39.176 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.176 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.176 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.176 06:47:57 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.434 06:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:39.434 [2024-12-06 06:47:57.990151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.434 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.434 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7146d6fb-4a43-4721-a236-eb27144bec52 00:21:39.434 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
7146d6fb-4a43-4721-a236-eb27144bec52 ']' 00:21:39.434 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:39.434 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.434 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.434 [2024-12-06 06:47:58.041956] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:39.434 [2024-12-06 06:47:58.041992] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:39.434 [2024-12-06 06:47:58.042098] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:39.434 [2024-12-06 06:47:58.042208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:39.434 [2024-12-06 06:47:58.042242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:39.434 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.434 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.434 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:39.434 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.434 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.434 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:39.694 
06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.694 06:47:58 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:39.694 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.695 [2024-12-06 06:47:58.198028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:39.695 [2024-12-06 06:47:58.200468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:39.695 [2024-12-06 06:47:58.200559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:39.695 [2024-12-06 06:47:58.200617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:39.695 [2024-12-06 06:47:58.200693] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:39.695 [2024-12-06 06:47:58.200763] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:39.695 [2024-12-06 06:47:58.200796] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:39.695 [2024-12-06 06:47:58.200826] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:21:39.695 [2024-12-06 06:47:58.200848] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:39.695 [2024-12-06 06:47:58.200864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:39.695 request: 00:21:39.695 { 00:21:39.695 "name": "raid_bdev1", 00:21:39.695 "raid_level": "raid5f", 00:21:39.695 "base_bdevs": [ 00:21:39.695 "malloc1", 00:21:39.695 "malloc2", 00:21:39.695 "malloc3", 00:21:39.695 "malloc4" 00:21:39.695 ], 00:21:39.695 "strip_size_kb": 64, 00:21:39.695 "superblock": false, 00:21:39.695 "method": "bdev_raid_create", 00:21:39.695 "req_id": 1 00:21:39.695 } 00:21:39.695 Got JSON-RPC error response 
00:21:39.695 response: 00:21:39.695 { 00:21:39.695 "code": -17, 00:21:39.695 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:39.695 } 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.695 [2024-12-06 06:47:58.262001] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:39.695 [2024-12-06 06:47:58.262076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:21:39.695 [2024-12-06 06:47:58.262103] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:39.695 [2024-12-06 06:47:58.262120] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.695 [2024-12-06 06:47:58.265090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.695 [2024-12-06 06:47:58.265137] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:39.695 [2024-12-06 06:47:58.265248] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:39.695 [2024-12-06 06:47:58.265327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:39.695 pt1 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.695 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.696 "name": "raid_bdev1", 00:21:39.696 "uuid": "7146d6fb-4a43-4721-a236-eb27144bec52", 00:21:39.696 "strip_size_kb": 64, 00:21:39.696 "state": "configuring", 00:21:39.696 "raid_level": "raid5f", 00:21:39.696 "superblock": true, 00:21:39.696 "num_base_bdevs": 4, 00:21:39.696 "num_base_bdevs_discovered": 1, 00:21:39.696 "num_base_bdevs_operational": 4, 00:21:39.696 "base_bdevs_list": [ 00:21:39.696 { 00:21:39.696 "name": "pt1", 00:21:39.696 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:39.696 "is_configured": true, 00:21:39.696 "data_offset": 2048, 00:21:39.696 "data_size": 63488 00:21:39.696 }, 00:21:39.696 { 00:21:39.696 "name": null, 00:21:39.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:39.696 "is_configured": false, 00:21:39.696 "data_offset": 2048, 00:21:39.696 "data_size": 63488 00:21:39.696 }, 00:21:39.696 { 00:21:39.696 "name": null, 00:21:39.696 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:39.696 "is_configured": false, 00:21:39.696 "data_offset": 2048, 00:21:39.696 "data_size": 63488 00:21:39.696 }, 00:21:39.696 { 00:21:39.696 "name": null, 00:21:39.696 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:39.696 "is_configured": false, 00:21:39.696 "data_offset": 2048, 00:21:39.696 "data_size": 63488 00:21:39.696 } 00:21:39.696 ] 00:21:39.696 }' 
00:21:39.696 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.696 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.265 [2024-12-06 06:47:58.758148] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:40.265 [2024-12-06 06:47:58.758238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.265 [2024-12-06 06:47:58.758267] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:40.265 [2024-12-06 06:47:58.758284] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.265 [2024-12-06 06:47:58.758877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.265 [2024-12-06 06:47:58.758939] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:40.265 [2024-12-06 06:47:58.759043] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:40.265 [2024-12-06 06:47:58.759080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:40.265 pt2 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.265 [2024-12-06 06:47:58.770196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.265 "name": "raid_bdev1", 00:21:40.265 "uuid": "7146d6fb-4a43-4721-a236-eb27144bec52", 00:21:40.265 "strip_size_kb": 64, 00:21:40.265 "state": "configuring", 00:21:40.265 "raid_level": "raid5f", 00:21:40.265 "superblock": true, 00:21:40.265 "num_base_bdevs": 4, 00:21:40.265 "num_base_bdevs_discovered": 1, 00:21:40.265 "num_base_bdevs_operational": 4, 00:21:40.265 "base_bdevs_list": [ 00:21:40.265 { 00:21:40.265 "name": "pt1", 00:21:40.265 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:40.265 "is_configured": true, 00:21:40.265 "data_offset": 2048, 00:21:40.265 "data_size": 63488 00:21:40.265 }, 00:21:40.265 { 00:21:40.265 "name": null, 00:21:40.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:40.265 "is_configured": false, 00:21:40.265 "data_offset": 0, 00:21:40.265 "data_size": 63488 00:21:40.265 }, 00:21:40.265 { 00:21:40.265 "name": null, 00:21:40.265 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:40.265 "is_configured": false, 00:21:40.265 "data_offset": 2048, 00:21:40.265 "data_size": 63488 00:21:40.265 }, 00:21:40.265 { 00:21:40.265 "name": null, 00:21:40.265 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:40.265 "is_configured": false, 00:21:40.265 "data_offset": 2048, 00:21:40.265 "data_size": 63488 00:21:40.265 } 00:21:40.265 ] 00:21:40.265 }' 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.265 06:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.833 [2024-12-06 06:47:59.306296] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:40.833 [2024-12-06 06:47:59.306375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.833 [2024-12-06 06:47:59.306405] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:40.833 [2024-12-06 06:47:59.306419] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.833 [2024-12-06 06:47:59.307024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.833 [2024-12-06 06:47:59.307050] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:40.833 [2024-12-06 06:47:59.307154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:40.833 [2024-12-06 06:47:59.307184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:40.833 pt2 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.833 [2024-12-06 06:47:59.318303] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:21:40.833 [2024-12-06 06:47:59.318371] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.833 [2024-12-06 06:47:59.318408] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:40.833 [2024-12-06 06:47:59.318425] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.833 [2024-12-06 06:47:59.318992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.833 [2024-12-06 06:47:59.319023] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:40.833 [2024-12-06 06:47:59.319124] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:40.833 [2024-12-06 06:47:59.319163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:40.833 pt3 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.833 [2024-12-06 06:47:59.330242] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:40.833 [2024-12-06 06:47:59.330290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.833 [2024-12-06 06:47:59.330316] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:40.833 [2024-12-06 06:47:59.330329] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.833 [2024-12-06 06:47:59.330860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.833 [2024-12-06 06:47:59.330892] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:40.833 [2024-12-06 06:47:59.330998] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:40.833 [2024-12-06 06:47:59.331030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:40.833 [2024-12-06 06:47:59.331207] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:40.833 [2024-12-06 06:47:59.331222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:40.833 [2024-12-06 06:47:59.331592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:40.833 [2024-12-06 06:47:59.338029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:40.833 [2024-12-06 06:47:59.338079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:40.833 [2024-12-06 06:47:59.338285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.833 pt4 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.833 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.833 "name": "raid_bdev1", 00:21:40.833 "uuid": "7146d6fb-4a43-4721-a236-eb27144bec52", 00:21:40.833 "strip_size_kb": 64, 00:21:40.833 "state": "online", 00:21:40.833 "raid_level": "raid5f", 00:21:40.833 "superblock": true, 00:21:40.833 "num_base_bdevs": 4, 00:21:40.833 "num_base_bdevs_discovered": 4, 00:21:40.833 "num_base_bdevs_operational": 4, 00:21:40.833 "base_bdevs_list": [ 00:21:40.833 { 00:21:40.833 "name": "pt1", 00:21:40.833 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:40.833 "is_configured": true, 00:21:40.833 
"data_offset": 2048, 00:21:40.833 "data_size": 63488 00:21:40.833 }, 00:21:40.833 { 00:21:40.833 "name": "pt2", 00:21:40.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:40.833 "is_configured": true, 00:21:40.833 "data_offset": 2048, 00:21:40.833 "data_size": 63488 00:21:40.833 }, 00:21:40.833 { 00:21:40.833 "name": "pt3", 00:21:40.833 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:40.833 "is_configured": true, 00:21:40.833 "data_offset": 2048, 00:21:40.833 "data_size": 63488 00:21:40.833 }, 00:21:40.833 { 00:21:40.833 "name": "pt4", 00:21:40.833 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:40.834 "is_configured": true, 00:21:40.834 "data_offset": 2048, 00:21:40.834 "data_size": 63488 00:21:40.834 } 00:21:40.834 ] 00:21:40.834 }' 00:21:40.834 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.834 06:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.401 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:41.401 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:41.401 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:41.401 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:41.401 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:41.401 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:41.401 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:41.401 06:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.401 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:41.401 06:47:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.401 [2024-12-06 06:47:59.854054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:41.401 06:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.401 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:41.401 "name": "raid_bdev1", 00:21:41.401 "aliases": [ 00:21:41.401 "7146d6fb-4a43-4721-a236-eb27144bec52" 00:21:41.401 ], 00:21:41.401 "product_name": "Raid Volume", 00:21:41.401 "block_size": 512, 00:21:41.401 "num_blocks": 190464, 00:21:41.401 "uuid": "7146d6fb-4a43-4721-a236-eb27144bec52", 00:21:41.401 "assigned_rate_limits": { 00:21:41.401 "rw_ios_per_sec": 0, 00:21:41.401 "rw_mbytes_per_sec": 0, 00:21:41.401 "r_mbytes_per_sec": 0, 00:21:41.401 "w_mbytes_per_sec": 0 00:21:41.401 }, 00:21:41.401 "claimed": false, 00:21:41.401 "zoned": false, 00:21:41.401 "supported_io_types": { 00:21:41.401 "read": true, 00:21:41.401 "write": true, 00:21:41.401 "unmap": false, 00:21:41.401 "flush": false, 00:21:41.401 "reset": true, 00:21:41.401 "nvme_admin": false, 00:21:41.401 "nvme_io": false, 00:21:41.401 "nvme_io_md": false, 00:21:41.401 "write_zeroes": true, 00:21:41.401 "zcopy": false, 00:21:41.401 "get_zone_info": false, 00:21:41.401 "zone_management": false, 00:21:41.401 "zone_append": false, 00:21:41.401 "compare": false, 00:21:41.401 "compare_and_write": false, 00:21:41.401 "abort": false, 00:21:41.401 "seek_hole": false, 00:21:41.401 "seek_data": false, 00:21:41.401 "copy": false, 00:21:41.401 "nvme_iov_md": false 00:21:41.401 }, 00:21:41.401 "driver_specific": { 00:21:41.401 "raid": { 00:21:41.401 "uuid": "7146d6fb-4a43-4721-a236-eb27144bec52", 00:21:41.401 "strip_size_kb": 64, 00:21:41.401 "state": "online", 00:21:41.401 "raid_level": "raid5f", 00:21:41.401 "superblock": true, 00:21:41.401 "num_base_bdevs": 4, 00:21:41.401 "num_base_bdevs_discovered": 4, 
00:21:41.401 "num_base_bdevs_operational": 4, 00:21:41.401 "base_bdevs_list": [ 00:21:41.401 { 00:21:41.401 "name": "pt1", 00:21:41.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:41.401 "is_configured": true, 00:21:41.401 "data_offset": 2048, 00:21:41.401 "data_size": 63488 00:21:41.401 }, 00:21:41.401 { 00:21:41.401 "name": "pt2", 00:21:41.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:41.401 "is_configured": true, 00:21:41.401 "data_offset": 2048, 00:21:41.401 "data_size": 63488 00:21:41.401 }, 00:21:41.401 { 00:21:41.401 "name": "pt3", 00:21:41.401 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:41.401 "is_configured": true, 00:21:41.401 "data_offset": 2048, 00:21:41.401 "data_size": 63488 00:21:41.401 }, 00:21:41.401 { 00:21:41.401 "name": "pt4", 00:21:41.401 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:41.401 "is_configured": true, 00:21:41.401 "data_offset": 2048, 00:21:41.401 "data_size": 63488 00:21:41.401 } 00:21:41.401 ] 00:21:41.401 } 00:21:41.401 } 00:21:41.401 }' 00:21:41.401 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:41.401 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:41.401 pt2 00:21:41.401 pt3 00:21:41.401 pt4' 00:21:41.401 06:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:41.401 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:41.401 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:41.401 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:41.401 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.401 06:48:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:41.401 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.660 
06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:41.660 [2024-12-06 06:48:00.262118] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:41.660 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7146d6fb-4a43-4721-a236-eb27144bec52 '!=' 7146d6fb-4a43-4721-a236-eb27144bec52 ']' 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.919 [2024-12-06 06:48:00.314006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.919 "name": "raid_bdev1", 00:21:41.919 "uuid": "7146d6fb-4a43-4721-a236-eb27144bec52", 00:21:41.919 "strip_size_kb": 64, 00:21:41.919 "state": "online", 00:21:41.919 "raid_level": "raid5f", 00:21:41.919 "superblock": true, 00:21:41.919 "num_base_bdevs": 4, 00:21:41.919 "num_base_bdevs_discovered": 3, 00:21:41.919 "num_base_bdevs_operational": 3, 00:21:41.919 "base_bdevs_list": [ 00:21:41.919 { 00:21:41.919 "name": null, 00:21:41.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.919 "is_configured": false, 00:21:41.919 "data_offset": 0, 00:21:41.919 "data_size": 63488 00:21:41.919 }, 00:21:41.919 { 00:21:41.919 "name": "pt2", 00:21:41.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:41.919 "is_configured": true, 00:21:41.919 "data_offset": 2048, 00:21:41.919 "data_size": 63488 00:21:41.919 }, 00:21:41.919 { 00:21:41.919 "name": "pt3", 00:21:41.919 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:41.919 "is_configured": true, 00:21:41.919 "data_offset": 2048, 00:21:41.919 "data_size": 63488 00:21:41.919 }, 00:21:41.919 { 00:21:41.919 "name": "pt4", 00:21:41.919 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:41.919 "is_configured": true, 00:21:41.919 
"data_offset": 2048, 00:21:41.919 "data_size": 63488 00:21:41.919 } 00:21:41.919 ] 00:21:41.919 }' 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.919 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.486 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:42.486 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.487 [2024-12-06 06:48:00.846063] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:42.487 [2024-12-06 06:48:00.846105] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:42.487 [2024-12-06 06:48:00.846208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:42.487 [2024-12-06 06:48:00.846312] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:42.487 [2024-12-06 06:48:00.846328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.487 [2024-12-06 06:48:00.938092] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:42.487 [2024-12-06 06:48:00.938167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.487 [2024-12-06 06:48:00.938200] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:42.487 [2024-12-06 06:48:00.938214] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.487 [2024-12-06 06:48:00.941111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.487 [2024-12-06 06:48:00.941153] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:42.487 [2024-12-06 06:48:00.941281] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:42.487 [2024-12-06 06:48:00.941342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:42.487 pt2 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.487 06:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.487 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.487 "name": "raid_bdev1", 00:21:42.487 "uuid": "7146d6fb-4a43-4721-a236-eb27144bec52", 00:21:42.487 "strip_size_kb": 64, 00:21:42.487 "state": "configuring", 00:21:42.487 "raid_level": "raid5f", 00:21:42.487 "superblock": true, 00:21:42.487 
"num_base_bdevs": 4, 00:21:42.487 "num_base_bdevs_discovered": 1, 00:21:42.487 "num_base_bdevs_operational": 3, 00:21:42.487 "base_bdevs_list": [ 00:21:42.487 { 00:21:42.487 "name": null, 00:21:42.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.487 "is_configured": false, 00:21:42.487 "data_offset": 2048, 00:21:42.487 "data_size": 63488 00:21:42.487 }, 00:21:42.487 { 00:21:42.487 "name": "pt2", 00:21:42.487 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:42.487 "is_configured": true, 00:21:42.487 "data_offset": 2048, 00:21:42.487 "data_size": 63488 00:21:42.487 }, 00:21:42.487 { 00:21:42.487 "name": null, 00:21:42.487 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:42.487 "is_configured": false, 00:21:42.487 "data_offset": 2048, 00:21:42.487 "data_size": 63488 00:21:42.487 }, 00:21:42.487 { 00:21:42.487 "name": null, 00:21:42.487 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:42.487 "is_configured": false, 00:21:42.487 "data_offset": 2048, 00:21:42.487 "data_size": 63488 00:21:42.487 } 00:21:42.487 ] 00:21:42.487 }' 00:21:42.487 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.487 06:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.054 [2024-12-06 06:48:01.446245] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:43.054 [2024-12-06 
06:48:01.446347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.054 [2024-12-06 06:48:01.446385] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:21:43.054 [2024-12-06 06:48:01.446400] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.054 [2024-12-06 06:48:01.446975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.054 [2024-12-06 06:48:01.447000] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:43.054 [2024-12-06 06:48:01.447107] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:43.054 [2024-12-06 06:48:01.447140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:43.054 pt3 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.054 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.054 "name": "raid_bdev1", 00:21:43.054 "uuid": "7146d6fb-4a43-4721-a236-eb27144bec52", 00:21:43.054 "strip_size_kb": 64, 00:21:43.054 "state": "configuring", 00:21:43.054 "raid_level": "raid5f", 00:21:43.054 "superblock": true, 00:21:43.054 "num_base_bdevs": 4, 00:21:43.054 "num_base_bdevs_discovered": 2, 00:21:43.054 "num_base_bdevs_operational": 3, 00:21:43.054 "base_bdevs_list": [ 00:21:43.054 { 00:21:43.054 "name": null, 00:21:43.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.054 "is_configured": false, 00:21:43.054 "data_offset": 2048, 00:21:43.054 "data_size": 63488 00:21:43.054 }, 00:21:43.054 { 00:21:43.054 "name": "pt2", 00:21:43.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.054 "is_configured": true, 00:21:43.054 "data_offset": 2048, 00:21:43.054 "data_size": 63488 00:21:43.054 }, 00:21:43.054 { 00:21:43.054 "name": "pt3", 00:21:43.054 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:43.054 "is_configured": true, 00:21:43.054 "data_offset": 2048, 00:21:43.054 "data_size": 63488 00:21:43.054 }, 00:21:43.054 { 00:21:43.054 "name": null, 00:21:43.054 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:43.055 "is_configured": false, 00:21:43.055 "data_offset": 2048, 
00:21:43.055 "data_size": 63488 00:21:43.055 } 00:21:43.055 ] 00:21:43.055 }' 00:21:43.055 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.055 06:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.314 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:21:43.314 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:43.314 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:21:43.314 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:43.314 06:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.573 06:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.573 [2024-12-06 06:48:01.962412] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:43.573 [2024-12-06 06:48:01.962486] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.573 [2024-12-06 06:48:01.962517] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:43.573 [2024-12-06 06:48:01.962547] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.573 [2024-12-06 06:48:01.963132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.573 [2024-12-06 06:48:01.963164] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:43.573 [2024-12-06 06:48:01.963269] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:43.573 [2024-12-06 06:48:01.963308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:43.573 [2024-12-06 06:48:01.963475] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:43.573 [2024-12-06 06:48:01.963491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:43.573 [2024-12-06 06:48:01.963814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:43.573 [2024-12-06 06:48:01.970379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:43.573 [2024-12-06 06:48:01.970420] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:43.573 [2024-12-06 06:48:01.970865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.573 pt4 00:21:43.573 06:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.573 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:43.573 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.573 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:43.573 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:43.573 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:43.573 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:43.573 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.573 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.573 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.573 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.573 
06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.573 06:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.573 06:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.573 06:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.573 06:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.573 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.573 "name": "raid_bdev1", 00:21:43.573 "uuid": "7146d6fb-4a43-4721-a236-eb27144bec52", 00:21:43.573 "strip_size_kb": 64, 00:21:43.573 "state": "online", 00:21:43.573 "raid_level": "raid5f", 00:21:43.573 "superblock": true, 00:21:43.573 "num_base_bdevs": 4, 00:21:43.573 "num_base_bdevs_discovered": 3, 00:21:43.573 "num_base_bdevs_operational": 3, 00:21:43.573 "base_bdevs_list": [ 00:21:43.573 { 00:21:43.573 "name": null, 00:21:43.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.573 "is_configured": false, 00:21:43.573 "data_offset": 2048, 00:21:43.573 "data_size": 63488 00:21:43.573 }, 00:21:43.573 { 00:21:43.573 "name": "pt2", 00:21:43.573 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.573 "is_configured": true, 00:21:43.573 "data_offset": 2048, 00:21:43.573 "data_size": 63488 00:21:43.573 }, 00:21:43.573 { 00:21:43.573 "name": "pt3", 00:21:43.573 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:43.573 "is_configured": true, 00:21:43.573 "data_offset": 2048, 00:21:43.573 "data_size": 63488 00:21:43.573 }, 00:21:43.573 { 00:21:43.573 "name": "pt4", 00:21:43.573 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:43.573 "is_configured": true, 00:21:43.573 "data_offset": 2048, 00:21:43.573 "data_size": 63488 00:21:43.573 } 00:21:43.573 ] 00:21:43.573 }' 00:21:43.573 06:48:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.573 06:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.141 [2024-12-06 06:48:02.538433] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:44.141 [2024-12-06 06:48:02.538476] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:44.141 [2024-12-06 06:48:02.538595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:44.141 [2024-12-06 06:48:02.538692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:44.141 [2024-12-06 06:48:02.538724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.141 [2024-12-06 06:48:02.610430] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:44.141 [2024-12-06 06:48:02.610506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.141 [2024-12-06 06:48:02.610558] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:21:44.141 [2024-12-06 06:48:02.610580] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.141 [2024-12-06 06:48:02.613624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.141 [2024-12-06 06:48:02.613670] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:44.141 [2024-12-06 06:48:02.613777] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:44.141 [2024-12-06 06:48:02.613840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:44.141 
[2024-12-06 06:48:02.614012] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:44.141 [2024-12-06 06:48:02.614059] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:44.141 [2024-12-06 06:48:02.614083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:44.141 [2024-12-06 06:48:02.614160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:44.141 [2024-12-06 06:48:02.614304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:44.141 pt1 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.141 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.142 06:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.142 06:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.142 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.142 06:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.142 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.142 "name": "raid_bdev1", 00:21:44.142 "uuid": "7146d6fb-4a43-4721-a236-eb27144bec52", 00:21:44.142 "strip_size_kb": 64, 00:21:44.142 "state": "configuring", 00:21:44.142 "raid_level": "raid5f", 00:21:44.142 "superblock": true, 00:21:44.142 "num_base_bdevs": 4, 00:21:44.142 "num_base_bdevs_discovered": 2, 00:21:44.142 "num_base_bdevs_operational": 3, 00:21:44.142 "base_bdevs_list": [ 00:21:44.142 { 00:21:44.142 "name": null, 00:21:44.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.142 "is_configured": false, 00:21:44.142 "data_offset": 2048, 00:21:44.142 "data_size": 63488 00:21:44.142 }, 00:21:44.142 { 00:21:44.142 "name": "pt2", 00:21:44.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:44.142 "is_configured": true, 00:21:44.142 "data_offset": 2048, 00:21:44.142 "data_size": 63488 00:21:44.142 }, 00:21:44.142 { 00:21:44.142 "name": "pt3", 00:21:44.142 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:44.142 "is_configured": true, 00:21:44.142 "data_offset": 2048, 00:21:44.142 "data_size": 63488 00:21:44.142 }, 00:21:44.142 { 00:21:44.142 "name": null, 00:21:44.142 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:44.142 "is_configured": false, 00:21:44.142 "data_offset": 2048, 00:21:44.142 "data_size": 63488 00:21:44.142 } 00:21:44.142 ] 
00:21:44.142 }' 00:21:44.142 06:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.142 06:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.710 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:44.710 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:21:44.710 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.710 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.710 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.710 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:21:44.710 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:44.710 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.710 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.710 [2024-12-06 06:48:03.214689] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:44.710 [2024-12-06 06:48:03.214761] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.710 [2024-12-06 06:48:03.214804] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:21:44.710 [2024-12-06 06:48:03.214818] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.710 [2024-12-06 06:48:03.215411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.710 [2024-12-06 06:48:03.215444] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:21:44.710 [2024-12-06 06:48:03.215571] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:44.710 [2024-12-06 06:48:03.215605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:44.710 [2024-12-06 06:48:03.215789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:44.710 [2024-12-06 06:48:03.215805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:44.710 [2024-12-06 06:48:03.216111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:44.710 [2024-12-06 06:48:03.222635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:44.711 [2024-12-06 06:48:03.222669] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:44.711 [2024-12-06 06:48:03.223038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.711 pt4 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.711 06:48:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.711 "name": "raid_bdev1", 00:21:44.711 "uuid": "7146d6fb-4a43-4721-a236-eb27144bec52", 00:21:44.711 "strip_size_kb": 64, 00:21:44.711 "state": "online", 00:21:44.711 "raid_level": "raid5f", 00:21:44.711 "superblock": true, 00:21:44.711 "num_base_bdevs": 4, 00:21:44.711 "num_base_bdevs_discovered": 3, 00:21:44.711 "num_base_bdevs_operational": 3, 00:21:44.711 "base_bdevs_list": [ 00:21:44.711 { 00:21:44.711 "name": null, 00:21:44.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.711 "is_configured": false, 00:21:44.711 "data_offset": 2048, 00:21:44.711 "data_size": 63488 00:21:44.711 }, 00:21:44.711 { 00:21:44.711 "name": "pt2", 00:21:44.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:44.711 "is_configured": true, 00:21:44.711 "data_offset": 2048, 00:21:44.711 "data_size": 63488 00:21:44.711 }, 00:21:44.711 { 00:21:44.711 "name": "pt3", 00:21:44.711 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:44.711 "is_configured": true, 00:21:44.711 "data_offset": 2048, 00:21:44.711 "data_size": 63488 
00:21:44.711 }, 00:21:44.711 { 00:21:44.711 "name": "pt4", 00:21:44.711 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:44.711 "is_configured": true, 00:21:44.711 "data_offset": 2048, 00:21:44.711 "data_size": 63488 00:21:44.711 } 00:21:44.711 ] 00:21:44.711 }' 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.711 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:45.278 [2024-12-06 06:48:03.802743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7146d6fb-4a43-4721-a236-eb27144bec52 '!=' 7146d6fb-4a43-4721-a236-eb27144bec52 ']' 00:21:45.278 06:48:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84704 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84704 ']' 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84704 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84704 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:45.278 killing process with pid 84704 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84704' 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84704 00:21:45.278 [2024-12-06 06:48:03.881597] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:45.278 [2024-12-06 06:48:03.881711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:45.278 06:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84704 00:21:45.278 [2024-12-06 06:48:03.881819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:45.278 [2024-12-06 06:48:03.881843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:45.956 [2024-12-06 06:48:04.242463] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:46.897 06:48:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:46.897 
00:21:46.897 real 0m9.486s 00:21:46.897 user 0m15.590s 00:21:46.897 sys 0m1.342s 00:21:46.897 06:48:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.897 06:48:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.897 ************************************ 00:21:46.897 END TEST raid5f_superblock_test 00:21:46.897 ************************************ 00:21:46.897 06:48:05 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:21:46.897 06:48:05 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:21:46.897 06:48:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:46.897 06:48:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.897 06:48:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:46.897 ************************************ 00:21:46.897 START TEST raid5f_rebuild_test 00:21:46.897 ************************************ 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:21:46.897 06:48:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85195 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85195 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85195 ']' 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.897 06:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.897 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:46.897 Zero copy mechanism will not be used. 00:21:46.897 [2024-12-06 06:48:05.461110] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:21:46.897 [2024-12-06 06:48:05.461256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85195 ] 00:21:47.155 [2024-12-06 06:48:05.632231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.155 [2024-12-06 06:48:05.763602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.413 [2024-12-06 06:48:05.965664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:47.413 [2024-12-06 06:48:05.965709] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.981 BaseBdev1_malloc 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.981 [2024-12-06 06:48:06.556418] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:21:47.981 [2024-12-06 06:48:06.556494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.981 [2024-12-06 06:48:06.556552] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:47.981 [2024-12-06 06:48:06.556575] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.981 [2024-12-06 06:48:06.559311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.981 [2024-12-06 06:48:06.559357] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:47.981 BaseBdev1 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.981 BaseBdev2_malloc 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.981 [2024-12-06 06:48:06.608940] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:47.981 [2024-12-06 06:48:06.609011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.981 [2024-12-06 06:48:06.609044] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:47.981 [2024-12-06 06:48:06.609063] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.981 [2024-12-06 06:48:06.611781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.981 [2024-12-06 06:48:06.611825] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:47.981 BaseBdev2 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.981 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.240 BaseBdev3_malloc 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.240 [2024-12-06 06:48:06.671570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:48.240 [2024-12-06 06:48:06.671634] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.240 [2024-12-06 06:48:06.671668] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:48.240 [2024-12-06 06:48:06.671688] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.240 
[2024-12-06 06:48:06.674393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.240 [2024-12-06 06:48:06.674438] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:48.240 BaseBdev3 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.240 BaseBdev4_malloc 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.240 [2024-12-06 06:48:06.723675] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:48.240 [2024-12-06 06:48:06.723746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.240 [2024-12-06 06:48:06.723777] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:48.240 [2024-12-06 06:48:06.723797] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.240 [2024-12-06 06:48:06.726459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.240 [2024-12-06 06:48:06.726506] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:21:48.240 BaseBdev4 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.240 spare_malloc 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.240 spare_delay 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.240 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.241 [2024-12-06 06:48:06.783611] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:48.241 [2024-12-06 06:48:06.783672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.241 [2024-12-06 06:48:06.783701] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:48.241 [2024-12-06 06:48:06.783721] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.241 [2024-12-06 06:48:06.786418] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.241 [2024-12-06 06:48:06.786464] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:48.241 spare 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.241 [2024-12-06 06:48:06.791675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:48.241 [2024-12-06 06:48:06.794045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:48.241 [2024-12-06 06:48:06.794134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:48.241 [2024-12-06 06:48:06.794212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:48.241 [2024-12-06 06:48:06.794333] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:48.241 [2024-12-06 06:48:06.794363] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:48.241 [2024-12-06 06:48:06.794710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:48.241 [2024-12-06 06:48:06.801420] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:48.241 [2024-12-06 06:48:06.801455] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:48.241 [2024-12-06 06:48:06.801716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.241 06:48:06 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.241 "name": "raid_bdev1", 00:21:48.241 "uuid": "fbb56596-b06a-4718-b736-7f806ee808a6", 00:21:48.241 "strip_size_kb": 64, 00:21:48.241 "state": "online", 00:21:48.241 
"raid_level": "raid5f", 00:21:48.241 "superblock": false, 00:21:48.241 "num_base_bdevs": 4, 00:21:48.241 "num_base_bdevs_discovered": 4, 00:21:48.241 "num_base_bdevs_operational": 4, 00:21:48.241 "base_bdevs_list": [ 00:21:48.241 { 00:21:48.241 "name": "BaseBdev1", 00:21:48.241 "uuid": "09064a95-63b5-5178-a45b-4d6eb7a22ed3", 00:21:48.241 "is_configured": true, 00:21:48.241 "data_offset": 0, 00:21:48.241 "data_size": 65536 00:21:48.241 }, 00:21:48.241 { 00:21:48.241 "name": "BaseBdev2", 00:21:48.241 "uuid": "1a318dd1-424f-5228-ba3f-b6aa7ea464b5", 00:21:48.241 "is_configured": true, 00:21:48.241 "data_offset": 0, 00:21:48.241 "data_size": 65536 00:21:48.241 }, 00:21:48.241 { 00:21:48.241 "name": "BaseBdev3", 00:21:48.241 "uuid": "12be4a60-ff97-597b-b89c-0d50bc558587", 00:21:48.241 "is_configured": true, 00:21:48.241 "data_offset": 0, 00:21:48.241 "data_size": 65536 00:21:48.241 }, 00:21:48.241 { 00:21:48.241 "name": "BaseBdev4", 00:21:48.241 "uuid": "ac32b6e8-c260-57dd-a55f-d3a4351589af", 00:21:48.241 "is_configured": true, 00:21:48.241 "data_offset": 0, 00:21:48.241 "data_size": 65536 00:21:48.241 } 00:21:48.241 ] 00:21:48.241 }' 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.241 06:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.808 [2024-12-06 06:48:07.293504] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:21:48.808 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:49.068 [2024-12-06 06:48:07.629368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:49.068 /dev/nbd0 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:49.068 1+0 records in 00:21:49.068 1+0 records out 00:21:49.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308018 s, 13.3 MB/s 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:21:49.068 06:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:21:50.004 512+0 records in 00:21:50.004 512+0 records out 00:21:50.004 100663296 bytes (101 MB, 96 MiB) copied, 0.652922 s, 154 MB/s 00:21:50.004 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:50.004 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:50.004 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:50.004 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:50.004 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:50.004 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:50.004 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:50.263 [2024-12-06 06:48:08.676179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.263 [2024-12-06 06:48:08.691736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.263 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.264 06:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.264 06:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.264 06:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.264 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.264 "name": "raid_bdev1", 00:21:50.264 "uuid": "fbb56596-b06a-4718-b736-7f806ee808a6", 00:21:50.264 "strip_size_kb": 64, 00:21:50.264 "state": "online", 00:21:50.264 "raid_level": "raid5f", 00:21:50.264 "superblock": false, 00:21:50.264 "num_base_bdevs": 4, 00:21:50.264 "num_base_bdevs_discovered": 3, 00:21:50.264 "num_base_bdevs_operational": 3, 00:21:50.264 "base_bdevs_list": [ 00:21:50.264 { 00:21:50.264 "name": null, 00:21:50.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.264 "is_configured": false, 00:21:50.264 "data_offset": 0, 00:21:50.264 "data_size": 65536 00:21:50.264 }, 00:21:50.264 { 00:21:50.264 "name": "BaseBdev2", 00:21:50.264 "uuid": "1a318dd1-424f-5228-ba3f-b6aa7ea464b5", 00:21:50.264 "is_configured": true, 00:21:50.264 "data_offset": 0, 00:21:50.264 "data_size": 65536 00:21:50.264 }, 00:21:50.264 { 00:21:50.264 "name": "BaseBdev3", 00:21:50.264 "uuid": 
"12be4a60-ff97-597b-b89c-0d50bc558587", 00:21:50.264 "is_configured": true, 00:21:50.264 "data_offset": 0, 00:21:50.264 "data_size": 65536 00:21:50.264 }, 00:21:50.264 { 00:21:50.264 "name": "BaseBdev4", 00:21:50.264 "uuid": "ac32b6e8-c260-57dd-a55f-d3a4351589af", 00:21:50.264 "is_configured": true, 00:21:50.264 "data_offset": 0, 00:21:50.264 "data_size": 65536 00:21:50.264 } 00:21:50.264 ] 00:21:50.264 }' 00:21:50.264 06:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.264 06:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.830 06:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:50.830 06:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.830 06:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.830 [2024-12-06 06:48:09.179964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:50.830 [2024-12-06 06:48:09.194204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:21:50.830 06:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.830 06:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:50.830 [2024-12-06 06:48:09.203112] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:51.764 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:51.764 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:51.764 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:51.764 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:51.764 06:48:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:51.764 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.764 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.764 06:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.764 06:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.764 06:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.764 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:51.764 "name": "raid_bdev1", 00:21:51.764 "uuid": "fbb56596-b06a-4718-b736-7f806ee808a6", 00:21:51.764 "strip_size_kb": 64, 00:21:51.764 "state": "online", 00:21:51.764 "raid_level": "raid5f", 00:21:51.764 "superblock": false, 00:21:51.764 "num_base_bdevs": 4, 00:21:51.764 "num_base_bdevs_discovered": 4, 00:21:51.764 "num_base_bdevs_operational": 4, 00:21:51.764 "process": { 00:21:51.764 "type": "rebuild", 00:21:51.764 "target": "spare", 00:21:51.764 "progress": { 00:21:51.764 "blocks": 17280, 00:21:51.764 "percent": 8 00:21:51.765 } 00:21:51.765 }, 00:21:51.765 "base_bdevs_list": [ 00:21:51.765 { 00:21:51.765 "name": "spare", 00:21:51.765 "uuid": "f8350c74-4337-5f3b-a172-bc61f7b04d16", 00:21:51.765 "is_configured": true, 00:21:51.765 "data_offset": 0, 00:21:51.765 "data_size": 65536 00:21:51.765 }, 00:21:51.765 { 00:21:51.765 "name": "BaseBdev2", 00:21:51.765 "uuid": "1a318dd1-424f-5228-ba3f-b6aa7ea464b5", 00:21:51.765 "is_configured": true, 00:21:51.765 "data_offset": 0, 00:21:51.765 "data_size": 65536 00:21:51.765 }, 00:21:51.765 { 00:21:51.765 "name": "BaseBdev3", 00:21:51.765 "uuid": "12be4a60-ff97-597b-b89c-0d50bc558587", 00:21:51.765 "is_configured": true, 00:21:51.765 "data_offset": 0, 00:21:51.765 "data_size": 65536 00:21:51.765 }, 
00:21:51.765 { 00:21:51.765 "name": "BaseBdev4", 00:21:51.765 "uuid": "ac32b6e8-c260-57dd-a55f-d3a4351589af", 00:21:51.765 "is_configured": true, 00:21:51.765 "data_offset": 0, 00:21:51.765 "data_size": 65536 00:21:51.765 } 00:21:51.765 ] 00:21:51.765 }' 00:21:51.765 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:51.765 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:51.765 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:51.765 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:51.765 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:51.765 06:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.765 06:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.765 [2024-12-06 06:48:10.364785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:52.023 [2024-12-06 06:48:10.415916] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:52.023 [2024-12-06 06:48:10.416000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.023 [2024-12-06 06:48:10.416027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:52.023 [2024-12-06 06:48:10.416042] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.023 "name": "raid_bdev1", 00:21:52.023 "uuid": "fbb56596-b06a-4718-b736-7f806ee808a6", 00:21:52.023 "strip_size_kb": 64, 00:21:52.023 "state": "online", 00:21:52.023 "raid_level": "raid5f", 00:21:52.023 "superblock": false, 00:21:52.023 "num_base_bdevs": 4, 00:21:52.023 "num_base_bdevs_discovered": 3, 00:21:52.023 "num_base_bdevs_operational": 3, 00:21:52.023 "base_bdevs_list": [ 00:21:52.023 { 00:21:52.023 "name": null, 00:21:52.023 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:52.023 "is_configured": false, 00:21:52.023 "data_offset": 0, 00:21:52.023 "data_size": 65536 00:21:52.023 }, 00:21:52.023 { 00:21:52.023 "name": "BaseBdev2", 00:21:52.023 "uuid": "1a318dd1-424f-5228-ba3f-b6aa7ea464b5", 00:21:52.023 "is_configured": true, 00:21:52.023 "data_offset": 0, 00:21:52.023 "data_size": 65536 00:21:52.023 }, 00:21:52.023 { 00:21:52.023 "name": "BaseBdev3", 00:21:52.023 "uuid": "12be4a60-ff97-597b-b89c-0d50bc558587", 00:21:52.023 "is_configured": true, 00:21:52.023 "data_offset": 0, 00:21:52.023 "data_size": 65536 00:21:52.023 }, 00:21:52.023 { 00:21:52.023 "name": "BaseBdev4", 00:21:52.023 "uuid": "ac32b6e8-c260-57dd-a55f-d3a4351589af", 00:21:52.023 "is_configured": true, 00:21:52.023 "data_offset": 0, 00:21:52.023 "data_size": 65536 00:21:52.023 } 00:21:52.023 ] 00:21:52.023 }' 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.023 06:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.620 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:52.620 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:52.620 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:52.620 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:52.620 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:52.620 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.620 06:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.620 06:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.620 06:48:10 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.620 06:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.620 06:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:52.620 "name": "raid_bdev1", 00:21:52.620 "uuid": "fbb56596-b06a-4718-b736-7f806ee808a6", 00:21:52.620 "strip_size_kb": 64, 00:21:52.620 "state": "online", 00:21:52.620 "raid_level": "raid5f", 00:21:52.620 "superblock": false, 00:21:52.620 "num_base_bdevs": 4, 00:21:52.620 "num_base_bdevs_discovered": 3, 00:21:52.620 "num_base_bdevs_operational": 3, 00:21:52.620 "base_bdevs_list": [ 00:21:52.620 { 00:21:52.620 "name": null, 00:21:52.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.620 "is_configured": false, 00:21:52.620 "data_offset": 0, 00:21:52.620 "data_size": 65536 00:21:52.620 }, 00:21:52.620 { 00:21:52.620 "name": "BaseBdev2", 00:21:52.620 "uuid": "1a318dd1-424f-5228-ba3f-b6aa7ea464b5", 00:21:52.620 "is_configured": true, 00:21:52.620 "data_offset": 0, 00:21:52.620 "data_size": 65536 00:21:52.620 }, 00:21:52.620 { 00:21:52.620 "name": "BaseBdev3", 00:21:52.620 "uuid": "12be4a60-ff97-597b-b89c-0d50bc558587", 00:21:52.620 "is_configured": true, 00:21:52.620 "data_offset": 0, 00:21:52.620 "data_size": 65536 00:21:52.620 }, 00:21:52.620 { 00:21:52.620 "name": "BaseBdev4", 00:21:52.620 "uuid": "ac32b6e8-c260-57dd-a55f-d3a4351589af", 00:21:52.620 "is_configured": true, 00:21:52.620 "data_offset": 0, 00:21:52.620 "data_size": 65536 00:21:52.620 } 00:21:52.620 ] 00:21:52.620 }' 00:21:52.620 06:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:52.620 06:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:52.620 06:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:52.620 06:48:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:52.620 06:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:52.620 06:48:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.620 06:48:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.620 [2024-12-06 06:48:11.115021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:52.620 [2024-12-06 06:48:11.128589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:21:52.620 06:48:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.620 06:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:52.620 [2024-12-06 06:48:11.137656] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:53.556 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.556 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.556 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:53.556 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:53.556 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.556 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.556 06:48:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.556 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.556 06:48:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.556 06:48:12 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.814 "name": "raid_bdev1", 00:21:53.814 "uuid": "fbb56596-b06a-4718-b736-7f806ee808a6", 00:21:53.814 "strip_size_kb": 64, 00:21:53.814 "state": "online", 00:21:53.814 "raid_level": "raid5f", 00:21:53.814 "superblock": false, 00:21:53.814 "num_base_bdevs": 4, 00:21:53.814 "num_base_bdevs_discovered": 4, 00:21:53.814 "num_base_bdevs_operational": 4, 00:21:53.814 "process": { 00:21:53.814 "type": "rebuild", 00:21:53.814 "target": "spare", 00:21:53.814 "progress": { 00:21:53.814 "blocks": 17280, 00:21:53.814 "percent": 8 00:21:53.814 } 00:21:53.814 }, 00:21:53.814 "base_bdevs_list": [ 00:21:53.814 { 00:21:53.814 "name": "spare", 00:21:53.814 "uuid": "f8350c74-4337-5f3b-a172-bc61f7b04d16", 00:21:53.814 "is_configured": true, 00:21:53.814 "data_offset": 0, 00:21:53.814 "data_size": 65536 00:21:53.814 }, 00:21:53.814 { 00:21:53.814 "name": "BaseBdev2", 00:21:53.814 "uuid": "1a318dd1-424f-5228-ba3f-b6aa7ea464b5", 00:21:53.814 "is_configured": true, 00:21:53.814 "data_offset": 0, 00:21:53.814 "data_size": 65536 00:21:53.814 }, 00:21:53.814 { 00:21:53.814 "name": "BaseBdev3", 00:21:53.814 "uuid": "12be4a60-ff97-597b-b89c-0d50bc558587", 00:21:53.814 "is_configured": true, 00:21:53.814 "data_offset": 0, 00:21:53.814 "data_size": 65536 00:21:53.814 }, 00:21:53.814 { 00:21:53.814 "name": "BaseBdev4", 00:21:53.814 "uuid": "ac32b6e8-c260-57dd-a55f-d3a4351589af", 00:21:53.814 "is_configured": true, 00:21:53.814 "data_offset": 0, 00:21:53.814 "data_size": 65536 00:21:53.814 } 00:21:53.814 ] 00:21:53.814 }' 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=672 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.814 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.814 "name": "raid_bdev1", 00:21:53.814 "uuid": "fbb56596-b06a-4718-b736-7f806ee808a6", 
00:21:53.814 "strip_size_kb": 64, 00:21:53.814 "state": "online", 00:21:53.814 "raid_level": "raid5f", 00:21:53.814 "superblock": false, 00:21:53.814 "num_base_bdevs": 4, 00:21:53.814 "num_base_bdevs_discovered": 4, 00:21:53.815 "num_base_bdevs_operational": 4, 00:21:53.815 "process": { 00:21:53.815 "type": "rebuild", 00:21:53.815 "target": "spare", 00:21:53.815 "progress": { 00:21:53.815 "blocks": 21120, 00:21:53.815 "percent": 10 00:21:53.815 } 00:21:53.815 }, 00:21:53.815 "base_bdevs_list": [ 00:21:53.815 { 00:21:53.815 "name": "spare", 00:21:53.815 "uuid": "f8350c74-4337-5f3b-a172-bc61f7b04d16", 00:21:53.815 "is_configured": true, 00:21:53.815 "data_offset": 0, 00:21:53.815 "data_size": 65536 00:21:53.815 }, 00:21:53.815 { 00:21:53.815 "name": "BaseBdev2", 00:21:53.815 "uuid": "1a318dd1-424f-5228-ba3f-b6aa7ea464b5", 00:21:53.815 "is_configured": true, 00:21:53.815 "data_offset": 0, 00:21:53.815 "data_size": 65536 00:21:53.815 }, 00:21:53.815 { 00:21:53.815 "name": "BaseBdev3", 00:21:53.815 "uuid": "12be4a60-ff97-597b-b89c-0d50bc558587", 00:21:53.815 "is_configured": true, 00:21:53.815 "data_offset": 0, 00:21:53.815 "data_size": 65536 00:21:53.815 }, 00:21:53.815 { 00:21:53.815 "name": "BaseBdev4", 00:21:53.815 "uuid": "ac32b6e8-c260-57dd-a55f-d3a4351589af", 00:21:53.815 "is_configured": true, 00:21:53.815 "data_offset": 0, 00:21:53.815 "data_size": 65536 00:21:53.815 } 00:21:53.815 ] 00:21:53.815 }' 00:21:53.815 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.815 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.815 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:54.073 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:54.073 06:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:55.010 06:48:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:55.010 06:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.010 06:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.010 06:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:55.010 06:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:55.010 06:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.010 06:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.010 06:48:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.010 06:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.010 06:48:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.010 06:48:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.010 06:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.010 "name": "raid_bdev1", 00:21:55.010 "uuid": "fbb56596-b06a-4718-b736-7f806ee808a6", 00:21:55.010 "strip_size_kb": 64, 00:21:55.010 "state": "online", 00:21:55.010 "raid_level": "raid5f", 00:21:55.010 "superblock": false, 00:21:55.010 "num_base_bdevs": 4, 00:21:55.010 "num_base_bdevs_discovered": 4, 00:21:55.010 "num_base_bdevs_operational": 4, 00:21:55.010 "process": { 00:21:55.010 "type": "rebuild", 00:21:55.010 "target": "spare", 00:21:55.010 "progress": { 00:21:55.010 "blocks": 44160, 00:21:55.010 "percent": 22 00:21:55.010 } 00:21:55.010 }, 00:21:55.010 "base_bdevs_list": [ 00:21:55.010 { 00:21:55.010 "name": "spare", 00:21:55.010 "uuid": "f8350c74-4337-5f3b-a172-bc61f7b04d16", 
00:21:55.010 "is_configured": true, 00:21:55.010 "data_offset": 0, 00:21:55.010 "data_size": 65536 00:21:55.010 }, 00:21:55.010 { 00:21:55.010 "name": "BaseBdev2", 00:21:55.010 "uuid": "1a318dd1-424f-5228-ba3f-b6aa7ea464b5", 00:21:55.010 "is_configured": true, 00:21:55.010 "data_offset": 0, 00:21:55.010 "data_size": 65536 00:21:55.010 }, 00:21:55.010 { 00:21:55.010 "name": "BaseBdev3", 00:21:55.010 "uuid": "12be4a60-ff97-597b-b89c-0d50bc558587", 00:21:55.010 "is_configured": true, 00:21:55.010 "data_offset": 0, 00:21:55.010 "data_size": 65536 00:21:55.010 }, 00:21:55.010 { 00:21:55.010 "name": "BaseBdev4", 00:21:55.010 "uuid": "ac32b6e8-c260-57dd-a55f-d3a4351589af", 00:21:55.010 "is_configured": true, 00:21:55.010 "data_offset": 0, 00:21:55.010 "data_size": 65536 00:21:55.010 } 00:21:55.010 ] 00:21:55.010 }' 00:21:55.010 06:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.010 06:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:55.010 06:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.010 06:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:55.010 06:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:56.386 06:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:56.386 06:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:56.386 06:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:56.386 06:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:56.386 06:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:56.386 06:48:14 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:56.386 06:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.386 06:48:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.386 06:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.386 06:48:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.386 06:48:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.386 06:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:56.386 "name": "raid_bdev1", 00:21:56.386 "uuid": "fbb56596-b06a-4718-b736-7f806ee808a6", 00:21:56.386 "strip_size_kb": 64, 00:21:56.386 "state": "online", 00:21:56.386 "raid_level": "raid5f", 00:21:56.386 "superblock": false, 00:21:56.386 "num_base_bdevs": 4, 00:21:56.386 "num_base_bdevs_discovered": 4, 00:21:56.386 "num_base_bdevs_operational": 4, 00:21:56.386 "process": { 00:21:56.386 "type": "rebuild", 00:21:56.386 "target": "spare", 00:21:56.386 "progress": { 00:21:56.386 "blocks": 65280, 00:21:56.386 "percent": 33 00:21:56.386 } 00:21:56.386 }, 00:21:56.386 "base_bdevs_list": [ 00:21:56.386 { 00:21:56.386 "name": "spare", 00:21:56.386 "uuid": "f8350c74-4337-5f3b-a172-bc61f7b04d16", 00:21:56.386 "is_configured": true, 00:21:56.386 "data_offset": 0, 00:21:56.386 "data_size": 65536 00:21:56.386 }, 00:21:56.386 { 00:21:56.386 "name": "BaseBdev2", 00:21:56.386 "uuid": "1a318dd1-424f-5228-ba3f-b6aa7ea464b5", 00:21:56.386 "is_configured": true, 00:21:56.386 "data_offset": 0, 00:21:56.386 "data_size": 65536 00:21:56.386 }, 00:21:56.386 { 00:21:56.386 "name": "BaseBdev3", 00:21:56.386 "uuid": "12be4a60-ff97-597b-b89c-0d50bc558587", 00:21:56.386 "is_configured": true, 00:21:56.386 "data_offset": 0, 00:21:56.386 "data_size": 65536 00:21:56.386 }, 00:21:56.386 { 00:21:56.386 "name": 
"BaseBdev4", 00:21:56.386 "uuid": "ac32b6e8-c260-57dd-a55f-d3a4351589af", 00:21:56.386 "is_configured": true, 00:21:56.386 "data_offset": 0, 00:21:56.386 "data_size": 65536 00:21:56.386 } 00:21:56.386 ] 00:21:56.386 }' 00:21:56.386 06:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:56.386 06:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:56.386 06:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:56.386 06:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:56.386 06:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:57.322 06:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:57.322 06:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:57.322 06:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:57.322 06:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:57.322 06:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:57.322 06:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:57.322 06:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.322 06:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.322 06:48:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.322 06:48:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.322 06:48:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.322 06:48:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:57.322 "name": "raid_bdev1", 00:21:57.322 "uuid": "fbb56596-b06a-4718-b736-7f806ee808a6", 00:21:57.322 "strip_size_kb": 64, 00:21:57.322 "state": "online", 00:21:57.322 "raid_level": "raid5f", 00:21:57.322 "superblock": false, 00:21:57.322 "num_base_bdevs": 4, 00:21:57.322 "num_base_bdevs_discovered": 4, 00:21:57.322 "num_base_bdevs_operational": 4, 00:21:57.322 "process": { 00:21:57.322 "type": "rebuild", 00:21:57.322 "target": "spare", 00:21:57.322 "progress": { 00:21:57.322 "blocks": 88320, 00:21:57.322 "percent": 44 00:21:57.322 } 00:21:57.322 }, 00:21:57.322 "base_bdevs_list": [ 00:21:57.322 { 00:21:57.322 "name": "spare", 00:21:57.322 "uuid": "f8350c74-4337-5f3b-a172-bc61f7b04d16", 00:21:57.322 "is_configured": true, 00:21:57.322 "data_offset": 0, 00:21:57.322 "data_size": 65536 00:21:57.322 }, 00:21:57.322 { 00:21:57.322 "name": "BaseBdev2", 00:21:57.322 "uuid": "1a318dd1-424f-5228-ba3f-b6aa7ea464b5", 00:21:57.322 "is_configured": true, 00:21:57.322 "data_offset": 0, 00:21:57.322 "data_size": 65536 00:21:57.322 }, 00:21:57.322 { 00:21:57.322 "name": "BaseBdev3", 00:21:57.322 "uuid": "12be4a60-ff97-597b-b89c-0d50bc558587", 00:21:57.322 "is_configured": true, 00:21:57.322 "data_offset": 0, 00:21:57.322 "data_size": 65536 00:21:57.322 }, 00:21:57.322 { 00:21:57.322 "name": "BaseBdev4", 00:21:57.322 "uuid": "ac32b6e8-c260-57dd-a55f-d3a4351589af", 00:21:57.322 "is_configured": true, 00:21:57.322 "data_offset": 0, 00:21:57.322 "data_size": 65536 00:21:57.322 } 00:21:57.322 ] 00:21:57.322 }' 00:21:57.322 06:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:57.322 06:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:57.322 06:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:57.579 06:48:15 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.579 06:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:58.515 06:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:58.515 06:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:58.515 06:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:58.515 06:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:58.515 06:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:58.515 06:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:58.515 06:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.515 06:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.515 06:48:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.516 06:48:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.516 06:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.516 06:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:58.516 "name": "raid_bdev1", 00:21:58.516 "uuid": "fbb56596-b06a-4718-b736-7f806ee808a6", 00:21:58.516 "strip_size_kb": 64, 00:21:58.516 "state": "online", 00:21:58.516 "raid_level": "raid5f", 00:21:58.516 "superblock": false, 00:21:58.516 "num_base_bdevs": 4, 00:21:58.516 "num_base_bdevs_discovered": 4, 00:21:58.516 "num_base_bdevs_operational": 4, 00:21:58.516 "process": { 00:21:58.516 "type": "rebuild", 00:21:58.516 "target": "spare", 00:21:58.516 "progress": { 00:21:58.516 "blocks": 109440, 00:21:58.516 "percent": 55 00:21:58.516 } 
00:21:58.516 }, 00:21:58.516 "base_bdevs_list": [ 00:21:58.516 { 00:21:58.516 "name": "spare", 00:21:58.516 "uuid": "f8350c74-4337-5f3b-a172-bc61f7b04d16", 00:21:58.516 "is_configured": true, 00:21:58.516 "data_offset": 0, 00:21:58.516 "data_size": 65536 00:21:58.516 }, 00:21:58.516 { 00:21:58.516 "name": "BaseBdev2", 00:21:58.516 "uuid": "1a318dd1-424f-5228-ba3f-b6aa7ea464b5", 00:21:58.516 "is_configured": true, 00:21:58.516 "data_offset": 0, 00:21:58.516 "data_size": 65536 00:21:58.516 }, 00:21:58.516 { 00:21:58.516 "name": "BaseBdev3", 00:21:58.516 "uuid": "12be4a60-ff97-597b-b89c-0d50bc558587", 00:21:58.516 "is_configured": true, 00:21:58.516 "data_offset": 0, 00:21:58.516 "data_size": 65536 00:21:58.516 }, 00:21:58.516 { 00:21:58.516 "name": "BaseBdev4", 00:21:58.516 "uuid": "ac32b6e8-c260-57dd-a55f-d3a4351589af", 00:21:58.516 "is_configured": true, 00:21:58.516 "data_offset": 0, 00:21:58.516 "data_size": 65536 00:21:58.516 } 00:21:58.516 ] 00:21:58.516 }' 00:21:58.516 06:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:58.516 06:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:58.516 06:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:58.516 06:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:58.516 06:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:59.890 06:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:59.890 06:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:59.890 06:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:59.890 06:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:59.890 
06:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:59.890 06:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:59.890 06:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.890 06:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.890 06:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.890 06:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.890 06:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.890 06:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:59.890 "name": "raid_bdev1", 00:21:59.890 "uuid": "fbb56596-b06a-4718-b736-7f806ee808a6", 00:21:59.891 "strip_size_kb": 64, 00:21:59.891 "state": "online", 00:21:59.891 "raid_level": "raid5f", 00:21:59.891 "superblock": false, 00:21:59.891 "num_base_bdevs": 4, 00:21:59.891 "num_base_bdevs_discovered": 4, 00:21:59.891 "num_base_bdevs_operational": 4, 00:21:59.891 "process": { 00:21:59.891 "type": "rebuild", 00:21:59.891 "target": "spare", 00:21:59.891 "progress": { 00:21:59.891 "blocks": 132480, 00:21:59.891 "percent": 67 00:21:59.891 } 00:21:59.891 }, 00:21:59.891 "base_bdevs_list": [ 00:21:59.891 { 00:21:59.891 "name": "spare", 00:21:59.891 "uuid": "f8350c74-4337-5f3b-a172-bc61f7b04d16", 00:21:59.891 "is_configured": true, 00:21:59.891 "data_offset": 0, 00:21:59.891 "data_size": 65536 00:21:59.891 }, 00:21:59.891 { 00:21:59.891 "name": "BaseBdev2", 00:21:59.891 "uuid": "1a318dd1-424f-5228-ba3f-b6aa7ea464b5", 00:21:59.891 "is_configured": true, 00:21:59.891 "data_offset": 0, 00:21:59.891 "data_size": 65536 00:21:59.891 }, 00:21:59.891 { 00:21:59.891 "name": "BaseBdev3", 00:21:59.891 "uuid": "12be4a60-ff97-597b-b89c-0d50bc558587", 
00:21:59.891 "is_configured": true, 00:21:59.891 "data_offset": 0, 00:21:59.891 "data_size": 65536 00:21:59.891 }, 00:21:59.891 { 00:21:59.891 "name": "BaseBdev4", 00:21:59.891 "uuid": "ac32b6e8-c260-57dd-a55f-d3a4351589af", 00:21:59.891 "is_configured": true, 00:21:59.891 "data_offset": 0, 00:21:59.891 "data_size": 65536 00:21:59.891 } 00:21:59.891 ] 00:21:59.891 }' 00:21:59.891 06:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:59.891 06:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:59.891 06:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:59.891 06:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:59.891 06:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:00.827 06:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:00.827 06:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:00.827 06:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:00.827 06:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:00.827 06:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:00.827 06:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:00.827 06:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.827 06:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.827 06:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.827 06:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:22:00.827 06:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.827 06:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:00.827 "name": "raid_bdev1", 00:22:00.827 "uuid": "fbb56596-b06a-4718-b736-7f806ee808a6", 00:22:00.827 "strip_size_kb": 64, 00:22:00.827 "state": "online", 00:22:00.827 "raid_level": "raid5f", 00:22:00.827 "superblock": false, 00:22:00.827 "num_base_bdevs": 4, 00:22:00.827 "num_base_bdevs_discovered": 4, 00:22:00.827 "num_base_bdevs_operational": 4, 00:22:00.827 "process": { 00:22:00.827 "type": "rebuild", 00:22:00.827 "target": "spare", 00:22:00.827 "progress": { 00:22:00.827 "blocks": 153600, 00:22:00.827 "percent": 78 00:22:00.827 } 00:22:00.827 }, 00:22:00.827 "base_bdevs_list": [ 00:22:00.827 { 00:22:00.827 "name": "spare", 00:22:00.827 "uuid": "f8350c74-4337-5f3b-a172-bc61f7b04d16", 00:22:00.827 "is_configured": true, 00:22:00.827 "data_offset": 0, 00:22:00.827 "data_size": 65536 00:22:00.827 }, 00:22:00.827 { 00:22:00.827 "name": "BaseBdev2", 00:22:00.827 "uuid": "1a318dd1-424f-5228-ba3f-b6aa7ea464b5", 00:22:00.827 "is_configured": true, 00:22:00.827 "data_offset": 0, 00:22:00.827 "data_size": 65536 00:22:00.827 }, 00:22:00.827 { 00:22:00.827 "name": "BaseBdev3", 00:22:00.827 "uuid": "12be4a60-ff97-597b-b89c-0d50bc558587", 00:22:00.827 "is_configured": true, 00:22:00.827 "data_offset": 0, 00:22:00.827 "data_size": 65536 00:22:00.827 }, 00:22:00.827 { 00:22:00.827 "name": "BaseBdev4", 00:22:00.827 "uuid": "ac32b6e8-c260-57dd-a55f-d3a4351589af", 00:22:00.827 "is_configured": true, 00:22:00.827 "data_offset": 0, 00:22:00.827 "data_size": 65536 00:22:00.827 } 00:22:00.827 ] 00:22:00.827 }' 00:22:00.827 06:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:00.827 06:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:00.827 06:48:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:00.827 06:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:00.827 06:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:01.809 06:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:01.809 06:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:01.809 06:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:01.809 06:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:01.809 06:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:01.809 06:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:01.809 06:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.809 06:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.809 06:48:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.809 06:48:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.809 06:48:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.068 06:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:02.068 "name": "raid_bdev1", 00:22:02.068 "uuid": "fbb56596-b06a-4718-b736-7f806ee808a6", 00:22:02.068 "strip_size_kb": 64, 00:22:02.068 "state": "online", 00:22:02.068 "raid_level": "raid5f", 00:22:02.068 "superblock": false, 00:22:02.068 "num_base_bdevs": 4, 00:22:02.068 "num_base_bdevs_discovered": 4, 00:22:02.068 "num_base_bdevs_operational": 4, 00:22:02.068 "process": { 00:22:02.068 
"type": "rebuild", 00:22:02.068 "target": "spare", 00:22:02.068 "progress": { 00:22:02.068 "blocks": 176640, 00:22:02.068 "percent": 89 00:22:02.068 } 00:22:02.068 }, 00:22:02.068 "base_bdevs_list": [ 00:22:02.068 { 00:22:02.068 "name": "spare", 00:22:02.068 "uuid": "f8350c74-4337-5f3b-a172-bc61f7b04d16", 00:22:02.068 "is_configured": true, 00:22:02.068 "data_offset": 0, 00:22:02.068 "data_size": 65536 00:22:02.068 }, 00:22:02.068 { 00:22:02.068 "name": "BaseBdev2", 00:22:02.068 "uuid": "1a318dd1-424f-5228-ba3f-b6aa7ea464b5", 00:22:02.068 "is_configured": true, 00:22:02.068 "data_offset": 0, 00:22:02.068 "data_size": 65536 00:22:02.068 }, 00:22:02.068 { 00:22:02.068 "name": "BaseBdev3", 00:22:02.068 "uuid": "12be4a60-ff97-597b-b89c-0d50bc558587", 00:22:02.068 "is_configured": true, 00:22:02.068 "data_offset": 0, 00:22:02.068 "data_size": 65536 00:22:02.068 }, 00:22:02.068 { 00:22:02.068 "name": "BaseBdev4", 00:22:02.068 "uuid": "ac32b6e8-c260-57dd-a55f-d3a4351589af", 00:22:02.068 "is_configured": true, 00:22:02.068 "data_offset": 0, 00:22:02.068 "data_size": 65536 00:22:02.068 } 00:22:02.068 ] 00:22:02.068 }' 00:22:02.068 06:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:02.068 06:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:02.068 06:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:02.068 06:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:02.068 06:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:03.003 [2024-12-06 06:48:21.548946] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:03.003 [2024-12-06 06:48:21.549048] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:03.003 [2024-12-06 06:48:21.549113] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.003 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:03.003 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:03.003 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:03.003 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:03.003 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:03.003 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:03.003 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.003 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.003 06:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.003 06:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.003 06:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:03.263 "name": "raid_bdev1", 00:22:03.263 "uuid": "fbb56596-b06a-4718-b736-7f806ee808a6", 00:22:03.263 "strip_size_kb": 64, 00:22:03.263 "state": "online", 00:22:03.263 "raid_level": "raid5f", 00:22:03.263 "superblock": false, 00:22:03.263 "num_base_bdevs": 4, 00:22:03.263 "num_base_bdevs_discovered": 4, 00:22:03.263 "num_base_bdevs_operational": 4, 00:22:03.263 "base_bdevs_list": [ 00:22:03.263 { 00:22:03.263 "name": "spare", 00:22:03.263 "uuid": "f8350c74-4337-5f3b-a172-bc61f7b04d16", 00:22:03.263 "is_configured": true, 00:22:03.263 "data_offset": 0, 00:22:03.263 "data_size": 65536 00:22:03.263 }, 00:22:03.263 { 
00:22:03.263 "name": "BaseBdev2", 00:22:03.263 "uuid": "1a318dd1-424f-5228-ba3f-b6aa7ea464b5", 00:22:03.263 "is_configured": true, 00:22:03.263 "data_offset": 0, 00:22:03.263 "data_size": 65536 00:22:03.263 }, 00:22:03.263 { 00:22:03.263 "name": "BaseBdev3", 00:22:03.263 "uuid": "12be4a60-ff97-597b-b89c-0d50bc558587", 00:22:03.263 "is_configured": true, 00:22:03.263 "data_offset": 0, 00:22:03.263 "data_size": 65536 00:22:03.263 }, 00:22:03.263 { 00:22:03.263 "name": "BaseBdev4", 00:22:03.263 "uuid": "ac32b6e8-c260-57dd-a55f-d3a4351589af", 00:22:03.263 "is_configured": true, 00:22:03.263 "data_offset": 0, 00:22:03.263 "data_size": 65536 00:22:03.263 } 00:22:03.263 ] 00:22:03.263 }' 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:03.263 "name": "raid_bdev1", 00:22:03.263 "uuid": "fbb56596-b06a-4718-b736-7f806ee808a6", 00:22:03.263 "strip_size_kb": 64, 00:22:03.263 "state": "online", 00:22:03.263 "raid_level": "raid5f", 00:22:03.263 "superblock": false, 00:22:03.263 "num_base_bdevs": 4, 00:22:03.263 "num_base_bdevs_discovered": 4, 00:22:03.263 "num_base_bdevs_operational": 4, 00:22:03.263 "base_bdevs_list": [ 00:22:03.263 { 00:22:03.263 "name": "spare", 00:22:03.263 "uuid": "f8350c74-4337-5f3b-a172-bc61f7b04d16", 00:22:03.263 "is_configured": true, 00:22:03.263 "data_offset": 0, 00:22:03.263 "data_size": 65536 00:22:03.263 }, 00:22:03.263 { 00:22:03.263 "name": "BaseBdev2", 00:22:03.263 "uuid": "1a318dd1-424f-5228-ba3f-b6aa7ea464b5", 00:22:03.263 "is_configured": true, 00:22:03.263 "data_offset": 0, 00:22:03.263 "data_size": 65536 00:22:03.263 }, 00:22:03.263 { 00:22:03.263 "name": "BaseBdev3", 00:22:03.263 "uuid": "12be4a60-ff97-597b-b89c-0d50bc558587", 00:22:03.263 "is_configured": true, 00:22:03.263 "data_offset": 0, 00:22:03.263 "data_size": 65536 00:22:03.263 }, 00:22:03.263 { 00:22:03.263 "name": "BaseBdev4", 00:22:03.263 "uuid": "ac32b6e8-c260-57dd-a55f-d3a4351589af", 00:22:03.263 "is_configured": true, 00:22:03.263 "data_offset": 0, 00:22:03.263 "data_size": 65536 00:22:03.263 } 00:22:03.263 ] 00:22:03.263 }' 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:03.263 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:03.263 06:48:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:03.522 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:03.522 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:03.522 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:03.522 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.522 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:03.522 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:03.522 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:03.522 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.522 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.522 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.522 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.522 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.522 06:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.522 06:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.522 06:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.522 06:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.522 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.522 "name": "raid_bdev1", 00:22:03.522 "uuid": 
"fbb56596-b06a-4718-b736-7f806ee808a6", 00:22:03.522 "strip_size_kb": 64, 00:22:03.522 "state": "online", 00:22:03.522 "raid_level": "raid5f", 00:22:03.522 "superblock": false, 00:22:03.522 "num_base_bdevs": 4, 00:22:03.522 "num_base_bdevs_discovered": 4, 00:22:03.522 "num_base_bdevs_operational": 4, 00:22:03.522 "base_bdevs_list": [ 00:22:03.522 { 00:22:03.522 "name": "spare", 00:22:03.522 "uuid": "f8350c74-4337-5f3b-a172-bc61f7b04d16", 00:22:03.522 "is_configured": true, 00:22:03.522 "data_offset": 0, 00:22:03.522 "data_size": 65536 00:22:03.522 }, 00:22:03.522 { 00:22:03.522 "name": "BaseBdev2", 00:22:03.522 "uuid": "1a318dd1-424f-5228-ba3f-b6aa7ea464b5", 00:22:03.522 "is_configured": true, 00:22:03.522 "data_offset": 0, 00:22:03.522 "data_size": 65536 00:22:03.522 }, 00:22:03.522 { 00:22:03.522 "name": "BaseBdev3", 00:22:03.522 "uuid": "12be4a60-ff97-597b-b89c-0d50bc558587", 00:22:03.522 "is_configured": true, 00:22:03.522 "data_offset": 0, 00:22:03.522 "data_size": 65536 00:22:03.522 }, 00:22:03.522 { 00:22:03.522 "name": "BaseBdev4", 00:22:03.522 "uuid": "ac32b6e8-c260-57dd-a55f-d3a4351589af", 00:22:03.522 "is_configured": true, 00:22:03.522 "data_offset": 0, 00:22:03.522 "data_size": 65536 00:22:03.522 } 00:22:03.522 ] 00:22:03.522 }' 00:22:03.522 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.522 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.089 [2024-12-06 06:48:22.472661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:04.089 [2024-12-06 06:48:22.472710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:22:04.089 [2024-12-06 06:48:22.472812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:04.089 [2024-12-06 06:48:22.472938] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:04.089 [2024-12-06 06:48:22.472956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:04.089 06:48:22 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:04.089 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:04.348 /dev/nbd0 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:04.348 1+0 records in 00:22:04.348 1+0 records out 00:22:04.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395291 s, 10.4 MB/s 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:04.348 06:48:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:04.607 /dev/nbd1 00:22:04.607 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:04.607 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:04.607 06:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:04.607 06:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:22:04.607 06:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:04.607 06:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:04.607 06:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:04.607 06:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:22:04.607 06:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:04.607 06:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:04.607 06:48:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:04.607 1+0 records in 00:22:04.607 1+0 records out 00:22:04.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480145 s, 8.5 MB/s 00:22:04.866 06:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.866 06:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:22:04.866 06:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.866 06:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:04.866 06:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:22:04.866 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:04.866 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:04.866 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:04.866 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:04.866 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:04.866 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:04.866 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:04.866 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:04.866 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:04.866 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:22:05.124 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:05.124 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:05.124 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:05.124 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:05.124 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:05.124 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:05.124 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:05.124 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:05.124 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:05.124 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:05.381 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:05.381 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:05.381 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:05.381 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:05.381 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:05.381 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:05.381 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:05.381 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:05.381 06:48:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:22:05.381 06:48:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85195 00:22:05.381 06:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85195 ']' 00:22:05.381 06:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85195 00:22:05.381 06:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:22:05.381 06:48:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.381 06:48:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85195 00:22:05.639 killing process with pid 85195 00:22:05.639 Received shutdown signal, test time was about 60.000000 seconds 00:22:05.639 00:22:05.639 Latency(us) 00:22:05.639 [2024-12-06T06:48:24.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.639 [2024-12-06T06:48:24.286Z] =================================================================================================================== 00:22:05.639 [2024-12-06T06:48:24.286Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:05.639 06:48:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:05.639 06:48:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:05.639 06:48:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85195' 00:22:05.639 06:48:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85195 00:22:05.639 [2024-12-06 06:48:24.030449] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:05.639 06:48:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85195 00:22:05.897 [2024-12-06 06:48:24.488104] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # 
return 0 00:22:07.269 00:22:07.269 real 0m20.162s 00:22:07.269 user 0m25.129s 00:22:07.269 sys 0m2.301s 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:07.269 ************************************ 00:22:07.269 END TEST raid5f_rebuild_test 00:22:07.269 ************************************ 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.269 06:48:25 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:22:07.269 06:48:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:07.269 06:48:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.269 06:48:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:07.269 ************************************ 00:22:07.269 START TEST raid5f_rebuild_test_sb 00:22:07.269 ************************************ 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:07.269 06:48:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 
00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85704 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:07.269 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85704 00:22:07.270 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85704 ']' 00:22:07.270 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.270 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.270 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.270 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.270 06:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.270 [2024-12-06 06:48:25.696551] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:22:07.270 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:22:07.270 Zero copy mechanism will not be used. 00:22:07.270 [2024-12-06 06:48:25.697018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85704 ] 00:22:07.270 [2024-12-06 06:48:25.884875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.527 [2024-12-06 06:48:26.034742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.785 [2024-12-06 06:48:26.243062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:07.785 [2024-12-06 06:48:26.243140] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:08.350 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.350 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:22:08.350 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:08.350 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:08.350 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.350 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.350 BaseBdev1_malloc 00:22:08.350 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.350 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:08.350 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.350 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.350 [2024-12-06 
06:48:26.809505] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:08.350 [2024-12-06 06:48:26.809594] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.351 [2024-12-06 06:48:26.809627] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:08.351 [2024-12-06 06:48:26.809647] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.351 [2024-12-06 06:48:26.812469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.351 [2024-12-06 06:48:26.812670] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:08.351 BaseBdev1 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.351 BaseBdev2_malloc 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.351 [2024-12-06 06:48:26.861989] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:08.351 [2024-12-06 06:48:26.862067] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.351 [2024-12-06 06:48:26.862101] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:08.351 [2024-12-06 06:48:26.862120] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.351 [2024-12-06 06:48:26.864888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.351 [2024-12-06 06:48:26.864938] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:08.351 BaseBdev2 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.351 BaseBdev3_malloc 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.351 [2024-12-06 06:48:26.929642] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:08.351 [2024-12-06 06:48:26.929716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.351 [2024-12-06 06:48:26.929750] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000008a80 00:22:08.351 [2024-12-06 06:48:26.929775] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.351 [2024-12-06 06:48:26.932499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.351 [2024-12-06 06:48:26.932569] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:08.351 BaseBdev3 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.351 BaseBdev4_malloc 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.351 [2024-12-06 06:48:26.982205] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:08.351 [2024-12-06 06:48:26.982294] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.351 [2024-12-06 06:48:26.982323] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:08.351 [2024-12-06 06:48:26.982341] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.351 [2024-12-06 06:48:26.985186] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.351 [2024-12-06 06:48:26.985251] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:08.351 BaseBdev4 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.351 06:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.610 spare_malloc 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.610 spare_delay 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.610 [2024-12-06 06:48:27.046408] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:08.610 [2024-12-06 06:48:27.046488] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.610 [2024-12-06 06:48:27.046515] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:22:08.610 [2024-12-06 06:48:27.046548] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.610 [2024-12-06 06:48:27.049286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.610 [2024-12-06 06:48:27.049485] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:08.610 spare 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.610 [2024-12-06 06:48:27.058468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:08.610 [2024-12-06 06:48:27.061054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:08.610 [2024-12-06 06:48:27.061153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:08.610 [2024-12-06 06:48:27.061234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:08.610 [2024-12-06 06:48:27.061510] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:08.610 [2024-12-06 06:48:27.061533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:08.610 [2024-12-06 06:48:27.061886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:08.610 [2024-12-06 06:48:27.069468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:08.610 [2024-12-06 06:48:27.069498] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:08.610 [2024-12-06 06:48:27.069756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:08.610 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.611 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.611 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.611 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.611 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.611 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.611 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.611 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.611 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.611 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.611 "name": "raid_bdev1", 00:22:08.611 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:08.611 "strip_size_kb": 64, 00:22:08.611 "state": "online", 00:22:08.611 "raid_level": "raid5f", 00:22:08.611 "superblock": true, 00:22:08.611 "num_base_bdevs": 4, 00:22:08.611 "num_base_bdevs_discovered": 4, 00:22:08.611 "num_base_bdevs_operational": 4, 00:22:08.611 "base_bdevs_list": [ 00:22:08.611 { 00:22:08.611 "name": "BaseBdev1", 00:22:08.611 "uuid": "f13e5abd-abe9-53c8-8fe4-62a7f8af8154", 00:22:08.611 "is_configured": true, 00:22:08.611 "data_offset": 2048, 00:22:08.611 "data_size": 63488 00:22:08.611 }, 00:22:08.611 { 00:22:08.611 "name": "BaseBdev2", 00:22:08.611 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:08.611 "is_configured": true, 00:22:08.611 "data_offset": 2048, 00:22:08.611 "data_size": 63488 00:22:08.611 }, 00:22:08.611 { 00:22:08.611 "name": "BaseBdev3", 00:22:08.611 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:08.611 "is_configured": true, 00:22:08.611 "data_offset": 2048, 00:22:08.611 "data_size": 63488 00:22:08.611 }, 00:22:08.611 { 00:22:08.611 "name": "BaseBdev4", 00:22:08.611 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:08.611 "is_configured": true, 00:22:08.611 "data_offset": 2048, 00:22:08.611 "data_size": 63488 00:22:08.611 } 00:22:08.611 ] 00:22:08.611 }' 00:22:08.611 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.611 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.198 [2024-12-06 06:48:27.589640] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:09.198 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:09.457 [2024-12-06 06:48:27.941546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:09.457 /dev/nbd0 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:22:09.457 1+0 records in 00:22:09.457 1+0 records out 00:22:09.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326494 s, 12.5 MB/s 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:22:09.457 06:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:22:09.457 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:22:10.024 496+0 records in 00:22:10.024 496+0 records out 00:22:10.024 97517568 bytes (98 MB, 93 MiB) copied, 0.601496 s, 162 MB/s 00:22:10.024 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:10.024 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:10.024 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:10.024 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 
-- # local nbd_list 00:22:10.024 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:10.024 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:10.024 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:10.592 [2024-12-06 06:48:28.942056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.592 [2024-12-06 06:48:28.953734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.592 06:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.592 06:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.593 "name": "raid_bdev1", 00:22:10.593 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:10.593 "strip_size_kb": 64, 00:22:10.593 "state": "online", 00:22:10.593 "raid_level": "raid5f", 00:22:10.593 "superblock": true, 00:22:10.593 "num_base_bdevs": 4, 00:22:10.593 "num_base_bdevs_discovered": 3, 00:22:10.593 
"num_base_bdevs_operational": 3, 00:22:10.593 "base_bdevs_list": [ 00:22:10.593 { 00:22:10.593 "name": null, 00:22:10.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.593 "is_configured": false, 00:22:10.593 "data_offset": 0, 00:22:10.593 "data_size": 63488 00:22:10.593 }, 00:22:10.593 { 00:22:10.593 "name": "BaseBdev2", 00:22:10.593 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:10.593 "is_configured": true, 00:22:10.593 "data_offset": 2048, 00:22:10.593 "data_size": 63488 00:22:10.593 }, 00:22:10.593 { 00:22:10.593 "name": "BaseBdev3", 00:22:10.593 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:10.593 "is_configured": true, 00:22:10.593 "data_offset": 2048, 00:22:10.593 "data_size": 63488 00:22:10.593 }, 00:22:10.593 { 00:22:10.593 "name": "BaseBdev4", 00:22:10.593 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:10.593 "is_configured": true, 00:22:10.593 "data_offset": 2048, 00:22:10.593 "data_size": 63488 00:22:10.593 } 00:22:10.593 ] 00:22:10.593 }' 00:22:10.593 06:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.593 06:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.851 06:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:10.851 06:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.851 06:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.851 [2024-12-06 06:48:29.445890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:10.851 [2024-12-06 06:48:29.460611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:22:10.851 06:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.851 06:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:10.851 
[2024-12-06 06:48:29.471804] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:12.227 "name": "raid_bdev1", 00:22:12.227 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:12.227 "strip_size_kb": 64, 00:22:12.227 "state": "online", 00:22:12.227 "raid_level": "raid5f", 00:22:12.227 "superblock": true, 00:22:12.227 "num_base_bdevs": 4, 00:22:12.227 "num_base_bdevs_discovered": 4, 00:22:12.227 "num_base_bdevs_operational": 4, 00:22:12.227 "process": { 00:22:12.227 "type": "rebuild", 00:22:12.227 "target": "spare", 00:22:12.227 "progress": { 00:22:12.227 "blocks": 17280, 00:22:12.227 "percent": 9 00:22:12.227 } 00:22:12.227 }, 00:22:12.227 "base_bdevs_list": [ 00:22:12.227 { 00:22:12.227 "name": 
"spare", 00:22:12.227 "uuid": "29bd81bb-213f-5f09-b759-0a266bca1841", 00:22:12.227 "is_configured": true, 00:22:12.227 "data_offset": 2048, 00:22:12.227 "data_size": 63488 00:22:12.227 }, 00:22:12.227 { 00:22:12.227 "name": "BaseBdev2", 00:22:12.227 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:12.227 "is_configured": true, 00:22:12.227 "data_offset": 2048, 00:22:12.227 "data_size": 63488 00:22:12.227 }, 00:22:12.227 { 00:22:12.227 "name": "BaseBdev3", 00:22:12.227 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:12.227 "is_configured": true, 00:22:12.227 "data_offset": 2048, 00:22:12.227 "data_size": 63488 00:22:12.227 }, 00:22:12.227 { 00:22:12.227 "name": "BaseBdev4", 00:22:12.227 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:12.227 "is_configured": true, 00:22:12.227 "data_offset": 2048, 00:22:12.227 "data_size": 63488 00:22:12.227 } 00:22:12.227 ] 00:22:12.227 }' 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.227 [2024-12-06 06:48:30.634139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:12.227 [2024-12-06 06:48:30.687450] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:12.227 [2024-12-06 
06:48:30.687615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:12.227 [2024-12-06 06:48:30.687659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:12.227 [2024-12-06 06:48:30.687675] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.227 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:12.228 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:12.228 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:12.228 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:12.228 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:12.228 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:12.228 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.228 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.228 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.228 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.228 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.228 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.228 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.228 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:12.228 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.228 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.228 "name": "raid_bdev1", 00:22:12.228 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:12.228 "strip_size_kb": 64, 00:22:12.228 "state": "online", 00:22:12.228 "raid_level": "raid5f", 00:22:12.228 "superblock": true, 00:22:12.228 "num_base_bdevs": 4, 00:22:12.228 "num_base_bdevs_discovered": 3, 00:22:12.228 "num_base_bdevs_operational": 3, 00:22:12.228 "base_bdevs_list": [ 00:22:12.228 { 00:22:12.228 "name": null, 00:22:12.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.228 "is_configured": false, 00:22:12.228 "data_offset": 0, 00:22:12.228 "data_size": 63488 00:22:12.228 }, 00:22:12.228 { 00:22:12.228 "name": "BaseBdev2", 00:22:12.228 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:12.228 "is_configured": true, 00:22:12.228 "data_offset": 2048, 00:22:12.228 "data_size": 63488 00:22:12.228 }, 00:22:12.228 { 00:22:12.228 "name": "BaseBdev3", 00:22:12.228 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:12.228 "is_configured": true, 00:22:12.228 "data_offset": 2048, 00:22:12.228 "data_size": 63488 00:22:12.228 }, 00:22:12.228 { 00:22:12.228 "name": "BaseBdev4", 00:22:12.228 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:12.228 "is_configured": true, 00:22:12.228 "data_offset": 2048, 00:22:12.228 "data_size": 63488 00:22:12.228 } 00:22:12.228 ] 00:22:12.228 }' 00:22:12.228 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.228 06:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.795 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:12.795 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
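Most of this trace is one pattern repeated: dump all raid bdevs, pick one by name with `jq select`, then read process fields with a `"none"` fallback (bdev_raid.sh @113/@174 and @176/@177). A standalone reproduction with the JSON shape from the log — `info` is sample data standing in for live `rpc.py bdev_raid_get_bdevs` output, not a real RPC call:

```shell
# $info mimics the bdev_raid_get_bdevs output captured in the trace
info='[{"name":"raid_bdev1","state":"online","process":{"type":"rebuild","target":"spare"}}]'

# select the bdev by name, as bdev_raid.sh does at @113/@174
bdev=$(jq -r '.[] | select(.name == "raid_bdev1")' <<< "$info")

# jq's // operator substitutes a default when .process is absent, so the
# same check works both before a rebuild starts and while it is running
jq -r '.process.type // "none"' <<< "$bdev"   # -> rebuild
jq -r '.process.target // "none"' <<< "$bdev" # -> spare
```

The `// "none"` fallback is what lets the later `verify_raid_bdev_process raid_bdev1 none none` call in this log pass once the rebuild record disappears from the JSON.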
raid_bdev_name=raid_bdev1 00:22:12.795 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:12.795 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:12.796 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:12.796 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.796 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.796 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.796 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.796 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.796 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:12.796 "name": "raid_bdev1", 00:22:12.796 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:12.796 "strip_size_kb": 64, 00:22:12.796 "state": "online", 00:22:12.796 "raid_level": "raid5f", 00:22:12.796 "superblock": true, 00:22:12.796 "num_base_bdevs": 4, 00:22:12.796 "num_base_bdevs_discovered": 3, 00:22:12.796 "num_base_bdevs_operational": 3, 00:22:12.796 "base_bdevs_list": [ 00:22:12.796 { 00:22:12.796 "name": null, 00:22:12.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.796 "is_configured": false, 00:22:12.796 "data_offset": 0, 00:22:12.796 "data_size": 63488 00:22:12.796 }, 00:22:12.796 { 00:22:12.796 "name": "BaseBdev2", 00:22:12.796 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:12.796 "is_configured": true, 00:22:12.796 "data_offset": 2048, 00:22:12.796 "data_size": 63488 00:22:12.796 }, 00:22:12.796 { 00:22:12.796 "name": "BaseBdev3", 00:22:12.796 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:12.796 "is_configured": true, 
00:22:12.796 "data_offset": 2048, 00:22:12.796 "data_size": 63488 00:22:12.796 }, 00:22:12.796 { 00:22:12.796 "name": "BaseBdev4", 00:22:12.796 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:12.796 "is_configured": true, 00:22:12.796 "data_offset": 2048, 00:22:12.796 "data_size": 63488 00:22:12.796 } 00:22:12.796 ] 00:22:12.796 }' 00:22:12.796 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:12.796 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:12.796 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:12.796 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:12.796 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:12.796 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.796 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.796 [2024-12-06 06:48:31.403408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:12.796 [2024-12-06 06:48:31.417068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:22:12.796 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.796 06:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:12.796 [2024-12-06 06:48:31.425895] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:14.169 06:48:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:14.169 "name": "raid_bdev1", 00:22:14.169 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:14.169 "strip_size_kb": 64, 00:22:14.169 "state": "online", 00:22:14.169 "raid_level": "raid5f", 00:22:14.169 "superblock": true, 00:22:14.169 "num_base_bdevs": 4, 00:22:14.169 "num_base_bdevs_discovered": 4, 00:22:14.169 "num_base_bdevs_operational": 4, 00:22:14.169 "process": { 00:22:14.169 "type": "rebuild", 00:22:14.169 "target": "spare", 00:22:14.169 "progress": { 00:22:14.169 "blocks": 17280, 00:22:14.169 "percent": 9 00:22:14.169 } 00:22:14.169 }, 00:22:14.169 "base_bdevs_list": [ 00:22:14.169 { 00:22:14.169 "name": "spare", 00:22:14.169 "uuid": "29bd81bb-213f-5f09-b759-0a266bca1841", 00:22:14.169 "is_configured": true, 00:22:14.169 "data_offset": 2048, 00:22:14.169 "data_size": 63488 00:22:14.169 }, 00:22:14.169 { 00:22:14.169 "name": "BaseBdev2", 00:22:14.169 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:14.169 "is_configured": true, 00:22:14.169 "data_offset": 2048, 00:22:14.169 "data_size": 63488 
00:22:14.169 }, 00:22:14.169 { 00:22:14.169 "name": "BaseBdev3", 00:22:14.169 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:14.169 "is_configured": true, 00:22:14.169 "data_offset": 2048, 00:22:14.169 "data_size": 63488 00:22:14.169 }, 00:22:14.169 { 00:22:14.169 "name": "BaseBdev4", 00:22:14.169 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:14.169 "is_configured": true, 00:22:14.169 "data_offset": 2048, 00:22:14.169 "data_size": 63488 00:22:14.169 } 00:22:14.169 ] 00:22:14.169 }' 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:14.169 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=692 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:14.169 06:48:32 
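The trace above records a real script failure, not a test assertion: `'[' = false ']'` followed by `bdev_raid.sh: line 666: [: =: unary operator expected`. This is the classic unquoted-empty-variable bug — when the variable expands to nothing, `[` sees only `= false` and has no left operand. A hedged reproduction with the two usual fixes (`flag` is an illustrative name, not the variable the script actually uses):

```shell
flag=""

# Buggy form: with flag empty this becomes `[ = false ]` and [ errors out
# with "unary operator expected", exactly as logged (stderr silenced here).
if [ $flag = false ] 2>/dev/null; then echo "never reached"; fi

# Fix 1: quote the expansion so [ always receives three arguments.
if [ "$flag" = false ]; then echo "flag is false"; fi

# Fix 2: use [[ ]], which does not word-split and tolerates empty operands.
if [[ $flag == false ]]; then echo "flag is false"; fi
```

The test run continues past the error only because the shell treats the failed `[` as a false condition; the guard it was meant to implement is silently skipped.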
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.169 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:14.170 "name": "raid_bdev1", 00:22:14.170 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:14.170 "strip_size_kb": 64, 00:22:14.170 "state": "online", 00:22:14.170 "raid_level": "raid5f", 00:22:14.170 "superblock": true, 00:22:14.170 "num_base_bdevs": 4, 00:22:14.170 "num_base_bdevs_discovered": 4, 00:22:14.170 "num_base_bdevs_operational": 4, 00:22:14.170 "process": { 00:22:14.170 "type": "rebuild", 00:22:14.170 "target": "spare", 00:22:14.170 "progress": { 00:22:14.170 "blocks": 21120, 00:22:14.170 "percent": 11 00:22:14.170 } 00:22:14.170 }, 00:22:14.170 "base_bdevs_list": [ 00:22:14.170 { 00:22:14.170 "name": "spare", 00:22:14.170 "uuid": "29bd81bb-213f-5f09-b759-0a266bca1841", 00:22:14.170 "is_configured": true, 00:22:14.170 "data_offset": 2048, 00:22:14.170 "data_size": 63488 00:22:14.170 }, 00:22:14.170 { 00:22:14.170 "name": "BaseBdev2", 00:22:14.170 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:14.170 "is_configured": true, 00:22:14.170 "data_offset": 2048, 00:22:14.170 "data_size": 63488 
00:22:14.170 }, 00:22:14.170 { 00:22:14.170 "name": "BaseBdev3", 00:22:14.170 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:14.170 "is_configured": true, 00:22:14.170 "data_offset": 2048, 00:22:14.170 "data_size": 63488 00:22:14.170 }, 00:22:14.170 { 00:22:14.170 "name": "BaseBdev4", 00:22:14.170 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:14.170 "is_configured": true, 00:22:14.170 "data_offset": 2048, 00:22:14.170 "data_size": 63488 00:22:14.170 } 00:22:14.170 ] 00:22:14.170 }' 00:22:14.170 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:14.170 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:14.170 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:14.170 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:14.170 06:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:15.109 06:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:15.109 06:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:15.109 06:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:15.109 06:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:15.109 06:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:15.109 06:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:15.109 06:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.109 06:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:15.109 06:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.109 06:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.109 06:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.367 06:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:15.367 "name": "raid_bdev1", 00:22:15.367 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:15.367 "strip_size_kb": 64, 00:22:15.367 "state": "online", 00:22:15.367 "raid_level": "raid5f", 00:22:15.367 "superblock": true, 00:22:15.367 "num_base_bdevs": 4, 00:22:15.367 "num_base_bdevs_discovered": 4, 00:22:15.367 "num_base_bdevs_operational": 4, 00:22:15.367 "process": { 00:22:15.367 "type": "rebuild", 00:22:15.367 "target": "spare", 00:22:15.367 "progress": { 00:22:15.367 "blocks": 42240, 00:22:15.367 "percent": 22 00:22:15.367 } 00:22:15.367 }, 00:22:15.367 "base_bdevs_list": [ 00:22:15.367 { 00:22:15.367 "name": "spare", 00:22:15.367 "uuid": "29bd81bb-213f-5f09-b759-0a266bca1841", 00:22:15.367 "is_configured": true, 00:22:15.367 "data_offset": 2048, 00:22:15.367 "data_size": 63488 00:22:15.367 }, 00:22:15.367 { 00:22:15.367 "name": "BaseBdev2", 00:22:15.367 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:15.367 "is_configured": true, 00:22:15.367 "data_offset": 2048, 00:22:15.367 "data_size": 63488 00:22:15.367 }, 00:22:15.367 { 00:22:15.367 "name": "BaseBdev3", 00:22:15.367 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:15.367 "is_configured": true, 00:22:15.367 "data_offset": 2048, 00:22:15.367 "data_size": 63488 00:22:15.367 }, 00:22:15.367 { 00:22:15.367 "name": "BaseBdev4", 00:22:15.367 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:15.367 "is_configured": true, 00:22:15.367 "data_offset": 2048, 00:22:15.367 "data_size": 63488 00:22:15.367 } 00:22:15.367 ] 00:22:15.367 }' 00:22:15.367 06:48:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:15.367 06:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:15.367 06:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:15.367 06:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:15.367 06:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:16.302 06:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:16.302 06:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:16.302 06:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:16.302 06:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:16.302 06:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:16.302 06:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:16.302 06:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.302 06:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.302 06:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.302 06:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.302 06:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.302 06:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:16.302 "name": "raid_bdev1", 00:22:16.302 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:16.302 
"strip_size_kb": 64, 00:22:16.302 "state": "online", 00:22:16.302 "raid_level": "raid5f", 00:22:16.302 "superblock": true, 00:22:16.302 "num_base_bdevs": 4, 00:22:16.302 "num_base_bdevs_discovered": 4, 00:22:16.302 "num_base_bdevs_operational": 4, 00:22:16.302 "process": { 00:22:16.302 "type": "rebuild", 00:22:16.302 "target": "spare", 00:22:16.302 "progress": { 00:22:16.302 "blocks": 65280, 00:22:16.302 "percent": 34 00:22:16.302 } 00:22:16.302 }, 00:22:16.302 "base_bdevs_list": [ 00:22:16.302 { 00:22:16.302 "name": "spare", 00:22:16.302 "uuid": "29bd81bb-213f-5f09-b759-0a266bca1841", 00:22:16.302 "is_configured": true, 00:22:16.302 "data_offset": 2048, 00:22:16.302 "data_size": 63488 00:22:16.302 }, 00:22:16.302 { 00:22:16.302 "name": "BaseBdev2", 00:22:16.302 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:16.302 "is_configured": true, 00:22:16.302 "data_offset": 2048, 00:22:16.302 "data_size": 63488 00:22:16.302 }, 00:22:16.302 { 00:22:16.302 "name": "BaseBdev3", 00:22:16.302 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:16.302 "is_configured": true, 00:22:16.302 "data_offset": 2048, 00:22:16.302 "data_size": 63488 00:22:16.302 }, 00:22:16.302 { 00:22:16.302 "name": "BaseBdev4", 00:22:16.302 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:16.302 "is_configured": true, 00:22:16.302 "data_offset": 2048, 00:22:16.302 "data_size": 63488 00:22:16.302 } 00:22:16.302 ] 00:22:16.302 }' 00:22:16.302 06:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:16.561 06:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:16.561 06:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:16.561 06:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:16.561 06:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:17.495 
06:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:17.495 06:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:17.495 06:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:17.495 06:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:17.495 06:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:17.495 06:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:17.495 06:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.495 06:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.495 06:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.495 06:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.495 06:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.495 06:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:17.495 "name": "raid_bdev1", 00:22:17.495 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:17.495 "strip_size_kb": 64, 00:22:17.495 "state": "online", 00:22:17.495 "raid_level": "raid5f", 00:22:17.495 "superblock": true, 00:22:17.495 "num_base_bdevs": 4, 00:22:17.495 "num_base_bdevs_discovered": 4, 00:22:17.495 "num_base_bdevs_operational": 4, 00:22:17.495 "process": { 00:22:17.495 "type": "rebuild", 00:22:17.495 "target": "spare", 00:22:17.495 "progress": { 00:22:17.495 "blocks": 88320, 00:22:17.495 "percent": 46 00:22:17.495 } 00:22:17.495 }, 00:22:17.495 "base_bdevs_list": [ 00:22:17.495 { 00:22:17.495 "name": "spare", 00:22:17.495 "uuid": 
"29bd81bb-213f-5f09-b759-0a266bca1841", 00:22:17.495 "is_configured": true, 00:22:17.495 "data_offset": 2048, 00:22:17.495 "data_size": 63488 00:22:17.495 }, 00:22:17.495 { 00:22:17.495 "name": "BaseBdev2", 00:22:17.495 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:17.495 "is_configured": true, 00:22:17.495 "data_offset": 2048, 00:22:17.495 "data_size": 63488 00:22:17.495 }, 00:22:17.495 { 00:22:17.495 "name": "BaseBdev3", 00:22:17.495 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:17.495 "is_configured": true, 00:22:17.495 "data_offset": 2048, 00:22:17.495 "data_size": 63488 00:22:17.495 }, 00:22:17.495 { 00:22:17.495 "name": "BaseBdev4", 00:22:17.495 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:17.495 "is_configured": true, 00:22:17.495 "data_offset": 2048, 00:22:17.495 "data_size": 63488 00:22:17.495 } 00:22:17.495 ] 00:22:17.495 }' 00:22:17.495 06:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:17.753 06:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:17.753 06:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:17.753 06:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:17.753 06:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:18.687 06:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:18.687 06:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:18.687 06:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:18.687 06:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:18.687 06:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:22:18.687 06:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:18.687 06:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.687 06:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.687 06:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.687 06:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.687 06:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.687 06:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:18.687 "name": "raid_bdev1", 00:22:18.687 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:18.687 "strip_size_kb": 64, 00:22:18.687 "state": "online", 00:22:18.687 "raid_level": "raid5f", 00:22:18.687 "superblock": true, 00:22:18.687 "num_base_bdevs": 4, 00:22:18.687 "num_base_bdevs_discovered": 4, 00:22:18.687 "num_base_bdevs_operational": 4, 00:22:18.687 "process": { 00:22:18.687 "type": "rebuild", 00:22:18.687 "target": "spare", 00:22:18.687 "progress": { 00:22:18.687 "blocks": 109440, 00:22:18.687 "percent": 57 00:22:18.687 } 00:22:18.687 }, 00:22:18.687 "base_bdevs_list": [ 00:22:18.687 { 00:22:18.687 "name": "spare", 00:22:18.687 "uuid": "29bd81bb-213f-5f09-b759-0a266bca1841", 00:22:18.687 "is_configured": true, 00:22:18.687 "data_offset": 2048, 00:22:18.687 "data_size": 63488 00:22:18.687 }, 00:22:18.687 { 00:22:18.687 "name": "BaseBdev2", 00:22:18.687 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:18.687 "is_configured": true, 00:22:18.687 "data_offset": 2048, 00:22:18.687 "data_size": 63488 00:22:18.687 }, 00:22:18.687 { 00:22:18.687 "name": "BaseBdev3", 00:22:18.687 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:18.687 "is_configured": true, 00:22:18.687 
"data_offset": 2048, 00:22:18.687 "data_size": 63488 00:22:18.687 }, 00:22:18.687 { 00:22:18.687 "name": "BaseBdev4", 00:22:18.687 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:18.687 "is_configured": true, 00:22:18.687 "data_offset": 2048, 00:22:18.687 "data_size": 63488 00:22:18.687 } 00:22:18.687 ] 00:22:18.687 }' 00:22:18.687 06:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:18.944 06:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:18.944 06:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:18.944 06:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:18.944 06:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:19.878 06:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:19.878 06:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:19.878 06:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:19.878 06:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:19.878 06:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:19.878 06:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:19.878 06:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.878 06:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.878 06:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.878 06:48:38 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:19.878 06:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.878 06:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:19.878 "name": "raid_bdev1", 00:22:19.878 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:19.878 "strip_size_kb": 64, 00:22:19.878 "state": "online", 00:22:19.878 "raid_level": "raid5f", 00:22:19.878 "superblock": true, 00:22:19.878 "num_base_bdevs": 4, 00:22:19.878 "num_base_bdevs_discovered": 4, 00:22:19.878 "num_base_bdevs_operational": 4, 00:22:19.878 "process": { 00:22:19.878 "type": "rebuild", 00:22:19.878 "target": "spare", 00:22:19.878 "progress": { 00:22:19.878 "blocks": 130560, 00:22:19.878 "percent": 68 00:22:19.878 } 00:22:19.878 }, 00:22:19.878 "base_bdevs_list": [ 00:22:19.878 { 00:22:19.878 "name": "spare", 00:22:19.878 "uuid": "29bd81bb-213f-5f09-b759-0a266bca1841", 00:22:19.878 "is_configured": true, 00:22:19.878 "data_offset": 2048, 00:22:19.878 "data_size": 63488 00:22:19.878 }, 00:22:19.878 { 00:22:19.878 "name": "BaseBdev2", 00:22:19.878 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:19.878 "is_configured": true, 00:22:19.878 "data_offset": 2048, 00:22:19.878 "data_size": 63488 00:22:19.878 }, 00:22:19.878 { 00:22:19.878 "name": "BaseBdev3", 00:22:19.878 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:19.878 "is_configured": true, 00:22:19.878 "data_offset": 2048, 00:22:19.878 "data_size": 63488 00:22:19.878 }, 00:22:19.878 { 00:22:19.878 "name": "BaseBdev4", 00:22:19.878 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:19.878 "is_configured": true, 00:22:19.878 "data_offset": 2048, 00:22:19.878 "data_size": 63488 00:22:19.878 } 00:22:19.878 ] 00:22:19.878 }' 00:22:19.878 06:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:19.878 06:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:22:19.878 06:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:20.135 06:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:20.135 06:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:21.070 06:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:21.070 06:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:21.070 06:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:21.070 06:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:21.070 06:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:21.070 06:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:21.070 06:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.070 06:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.070 06:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.070 06:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.070 06:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.070 06:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:21.070 "name": "raid_bdev1", 00:22:21.070 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:21.070 "strip_size_kb": 64, 00:22:21.070 "state": "online", 00:22:21.070 "raid_level": "raid5f", 00:22:21.070 "superblock": true, 00:22:21.070 "num_base_bdevs": 4, 00:22:21.070 "num_base_bdevs_discovered": 4, 
00:22:21.070 "num_base_bdevs_operational": 4, 00:22:21.070 "process": { 00:22:21.070 "type": "rebuild", 00:22:21.070 "target": "spare", 00:22:21.070 "progress": { 00:22:21.070 "blocks": 153600, 00:22:21.070 "percent": 80 00:22:21.070 } 00:22:21.070 }, 00:22:21.070 "base_bdevs_list": [ 00:22:21.070 { 00:22:21.070 "name": "spare", 00:22:21.070 "uuid": "29bd81bb-213f-5f09-b759-0a266bca1841", 00:22:21.070 "is_configured": true, 00:22:21.070 "data_offset": 2048, 00:22:21.070 "data_size": 63488 00:22:21.070 }, 00:22:21.070 { 00:22:21.070 "name": "BaseBdev2", 00:22:21.070 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:21.070 "is_configured": true, 00:22:21.070 "data_offset": 2048, 00:22:21.070 "data_size": 63488 00:22:21.070 }, 00:22:21.070 { 00:22:21.070 "name": "BaseBdev3", 00:22:21.070 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:21.070 "is_configured": true, 00:22:21.070 "data_offset": 2048, 00:22:21.070 "data_size": 63488 00:22:21.070 }, 00:22:21.070 { 00:22:21.070 "name": "BaseBdev4", 00:22:21.070 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:21.070 "is_configured": true, 00:22:21.070 "data_offset": 2048, 00:22:21.070 "data_size": 63488 00:22:21.070 } 00:22:21.070 ] 00:22:21.070 }' 00:22:21.070 06:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:21.070 06:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:21.070 06:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:21.070 06:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:21.070 06:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:22.477 06:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:22.477 06:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:22:22.477 06:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:22.477 06:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:22.477 06:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:22.477 06:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:22.477 06:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.477 06:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.477 06:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.477 06:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.477 06:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.477 06:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:22.477 "name": "raid_bdev1", 00:22:22.477 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:22.477 "strip_size_kb": 64, 00:22:22.477 "state": "online", 00:22:22.477 "raid_level": "raid5f", 00:22:22.477 "superblock": true, 00:22:22.477 "num_base_bdevs": 4, 00:22:22.477 "num_base_bdevs_discovered": 4, 00:22:22.477 "num_base_bdevs_operational": 4, 00:22:22.477 "process": { 00:22:22.477 "type": "rebuild", 00:22:22.477 "target": "spare", 00:22:22.477 "progress": { 00:22:22.477 "blocks": 174720, 00:22:22.477 "percent": 91 00:22:22.477 } 00:22:22.477 }, 00:22:22.477 "base_bdevs_list": [ 00:22:22.477 { 00:22:22.477 "name": "spare", 00:22:22.477 "uuid": "29bd81bb-213f-5f09-b759-0a266bca1841", 00:22:22.477 "is_configured": true, 00:22:22.477 "data_offset": 2048, 00:22:22.477 "data_size": 63488 00:22:22.477 }, 00:22:22.477 { 00:22:22.477 "name": "BaseBdev2", 
00:22:22.477 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:22.477 "is_configured": true, 00:22:22.477 "data_offset": 2048, 00:22:22.477 "data_size": 63488 00:22:22.477 }, 00:22:22.477 { 00:22:22.477 "name": "BaseBdev3", 00:22:22.477 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:22.477 "is_configured": true, 00:22:22.477 "data_offset": 2048, 00:22:22.477 "data_size": 63488 00:22:22.477 }, 00:22:22.477 { 00:22:22.477 "name": "BaseBdev4", 00:22:22.477 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:22.477 "is_configured": true, 00:22:22.477 "data_offset": 2048, 00:22:22.477 "data_size": 63488 00:22:22.477 } 00:22:22.477 ] 00:22:22.477 }' 00:22:22.477 06:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:22.477 06:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:22.477 06:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:22.477 06:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:22.477 06:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:23.045 [2024-12-06 06:48:41.534736] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:23.045 [2024-12-06 06:48:41.534843] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:23.045 [2024-12-06 06:48:41.535049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:23.304 06:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:23.304 06:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:23.304 06:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:23.304 06:48:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:23.304 06:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:23.304 06:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:23.304 06:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.304 06:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.304 06:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.304 06:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.304 06:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.304 06:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:23.304 "name": "raid_bdev1", 00:22:23.304 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:23.304 "strip_size_kb": 64, 00:22:23.304 "state": "online", 00:22:23.304 "raid_level": "raid5f", 00:22:23.304 "superblock": true, 00:22:23.304 "num_base_bdevs": 4, 00:22:23.304 "num_base_bdevs_discovered": 4, 00:22:23.304 "num_base_bdevs_operational": 4, 00:22:23.304 "base_bdevs_list": [ 00:22:23.304 { 00:22:23.304 "name": "spare", 00:22:23.304 "uuid": "29bd81bb-213f-5f09-b759-0a266bca1841", 00:22:23.304 "is_configured": true, 00:22:23.304 "data_offset": 2048, 00:22:23.304 "data_size": 63488 00:22:23.304 }, 00:22:23.304 { 00:22:23.304 "name": "BaseBdev2", 00:22:23.304 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:23.304 "is_configured": true, 00:22:23.304 "data_offset": 2048, 00:22:23.304 "data_size": 63488 00:22:23.304 }, 00:22:23.304 { 00:22:23.304 "name": "BaseBdev3", 00:22:23.304 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:23.304 "is_configured": true, 00:22:23.304 "data_offset": 2048, 00:22:23.304 
"data_size": 63488 00:22:23.304 }, 00:22:23.304 { 00:22:23.304 "name": "BaseBdev4", 00:22:23.304 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:23.304 "is_configured": true, 00:22:23.304 "data_offset": 2048, 00:22:23.304 "data_size": 63488 00:22:23.304 } 00:22:23.304 ] 00:22:23.304 }' 00:22:23.304 06:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:23.564 06:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:23.564 06:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.564 06:48:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:23.564 "name": "raid_bdev1", 00:22:23.564 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:23.564 "strip_size_kb": 64, 00:22:23.564 "state": "online", 00:22:23.564 "raid_level": "raid5f", 00:22:23.564 "superblock": true, 00:22:23.564 "num_base_bdevs": 4, 00:22:23.564 "num_base_bdevs_discovered": 4, 00:22:23.564 "num_base_bdevs_operational": 4, 00:22:23.564 "base_bdevs_list": [ 00:22:23.564 { 00:22:23.564 "name": "spare", 00:22:23.564 "uuid": "29bd81bb-213f-5f09-b759-0a266bca1841", 00:22:23.564 "is_configured": true, 00:22:23.564 "data_offset": 2048, 00:22:23.564 "data_size": 63488 00:22:23.564 }, 00:22:23.564 { 00:22:23.564 "name": "BaseBdev2", 00:22:23.564 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:23.564 "is_configured": true, 00:22:23.564 "data_offset": 2048, 00:22:23.564 "data_size": 63488 00:22:23.564 }, 00:22:23.564 { 00:22:23.564 "name": "BaseBdev3", 00:22:23.564 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:23.564 "is_configured": true, 00:22:23.564 "data_offset": 2048, 00:22:23.564 "data_size": 63488 00:22:23.564 }, 00:22:23.564 { 00:22:23.564 "name": "BaseBdev4", 00:22:23.564 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:23.564 "is_configured": true, 00:22:23.564 "data_offset": 2048, 00:22:23.564 "data_size": 63488 00:22:23.564 } 00:22:23.564 ] 00:22:23.564 }' 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.564 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.823 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:23.823 "name": "raid_bdev1", 00:22:23.823 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:23.823 "strip_size_kb": 64, 00:22:23.823 "state": "online", 00:22:23.823 "raid_level": "raid5f", 00:22:23.823 "superblock": true, 00:22:23.823 "num_base_bdevs": 4, 00:22:23.823 "num_base_bdevs_discovered": 4, 00:22:23.823 
"num_base_bdevs_operational": 4, 00:22:23.823 "base_bdevs_list": [ 00:22:23.823 { 00:22:23.823 "name": "spare", 00:22:23.823 "uuid": "29bd81bb-213f-5f09-b759-0a266bca1841", 00:22:23.823 "is_configured": true, 00:22:23.823 "data_offset": 2048, 00:22:23.823 "data_size": 63488 00:22:23.823 }, 00:22:23.823 { 00:22:23.823 "name": "BaseBdev2", 00:22:23.823 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:23.823 "is_configured": true, 00:22:23.823 "data_offset": 2048, 00:22:23.823 "data_size": 63488 00:22:23.823 }, 00:22:23.823 { 00:22:23.823 "name": "BaseBdev3", 00:22:23.823 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:23.823 "is_configured": true, 00:22:23.823 "data_offset": 2048, 00:22:23.823 "data_size": 63488 00:22:23.823 }, 00:22:23.823 { 00:22:23.823 "name": "BaseBdev4", 00:22:23.823 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:23.823 "is_configured": true, 00:22:23.823 "data_offset": 2048, 00:22:23.823 "data_size": 63488 00:22:23.823 } 00:22:23.823 ] 00:22:23.823 }' 00:22:23.823 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:23.823 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.083 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:24.083 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.083 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.083 [2024-12-06 06:48:42.714336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:24.083 [2024-12-06 06:48:42.714375] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:24.083 [2024-12-06 06:48:42.714483] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:24.083 [2024-12-06 06:48:42.714628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:22:24.083 [2024-12-06 06:48:42.714661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:24.083 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.083 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.083 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:22:24.083 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.083 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.342 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.342 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:24.342 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:24.342 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:24.342 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:24.342 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:24.342 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:24.342 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:24.342 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:24.342 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:24.342 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:24.342 06:48:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:24.342 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:24.342 06:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:24.601 /dev/nbd0 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:24.601 1+0 records in 00:22:24.601 1+0 records out 00:22:24.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527086 s, 7.8 MB/s 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # size=4096 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:24.601 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:24.860 /dev/nbd1 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:25.119 1+0 records in 00:22:25.119 1+0 records out 00:22:25.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431709 s, 9.5 MB/s 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:25.119 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:25.120 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:25.120 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:25.120 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:25.120 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:25.120 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:25.120 06:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:22:25.427 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:25.427 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:25.427 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:25.427 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:25.427 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:25.427 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:25.427 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:25.427 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:25.427 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:25.427 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:25.994 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:25.994 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:25.994 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:25.994 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:25.994 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:25.994 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:25.994 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:25.994 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:25.994 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:22:25.994 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.995 [2024-12-06 06:48:44.376380] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:25.995 [2024-12-06 06:48:44.376443] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.995 [2024-12-06 06:48:44.376476] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:22:25.995 [2024-12-06 06:48:44.376492] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.995 [2024-12-06 06:48:44.379431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.995 [2024-12-06 06:48:44.379477] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:25.995 [2024-12-06 06:48:44.379618] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:25.995 [2024-12-06 06:48:44.379687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:25.995 [2024-12-06 06:48:44.379886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:25.995 [2024-12-06 06:48:44.380033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:22:25.995 [2024-12-06 06:48:44.380173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:25.995 spare 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.995 [2024-12-06 06:48:44.480307] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:25.995 [2024-12-06 06:48:44.480548] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:25.995 [2024-12-06 06:48:44.480974] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:22:25.995 [2024-12-06 06:48:44.487453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:25.995 [2024-12-06 06:48:44.487479] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:25.995 [2024-12-06 06:48:44.487768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:25.995 06:48:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:25.995 "name": "raid_bdev1", 00:22:25.995 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:25.995 "strip_size_kb": 64, 00:22:25.995 "state": "online", 00:22:25.995 "raid_level": "raid5f", 00:22:25.995 "superblock": true, 00:22:25.995 "num_base_bdevs": 4, 00:22:25.995 "num_base_bdevs_discovered": 4, 00:22:25.995 "num_base_bdevs_operational": 4, 00:22:25.995 "base_bdevs_list": [ 00:22:25.995 { 00:22:25.995 "name": "spare", 00:22:25.995 "uuid": "29bd81bb-213f-5f09-b759-0a266bca1841", 00:22:25.995 "is_configured": true, 00:22:25.995 "data_offset": 2048, 00:22:25.995 "data_size": 63488 00:22:25.995 }, 00:22:25.995 { 00:22:25.995 "name": "BaseBdev2", 00:22:25.995 "uuid": 
"a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:25.995 "is_configured": true, 00:22:25.995 "data_offset": 2048, 00:22:25.995 "data_size": 63488 00:22:25.995 }, 00:22:25.995 { 00:22:25.995 "name": "BaseBdev3", 00:22:25.995 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:25.995 "is_configured": true, 00:22:25.995 "data_offset": 2048, 00:22:25.995 "data_size": 63488 00:22:25.995 }, 00:22:25.995 { 00:22:25.995 "name": "BaseBdev4", 00:22:25.995 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:25.995 "is_configured": true, 00:22:25.995 "data_offset": 2048, 00:22:25.995 "data_size": 63488 00:22:25.995 } 00:22:25.995 ] 00:22:25.995 }' 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:25.995 06:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.563 06:48:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:26.563 "name": "raid_bdev1", 00:22:26.563 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:26.563 "strip_size_kb": 64, 00:22:26.563 "state": "online", 00:22:26.563 "raid_level": "raid5f", 00:22:26.563 "superblock": true, 00:22:26.563 "num_base_bdevs": 4, 00:22:26.563 "num_base_bdevs_discovered": 4, 00:22:26.563 "num_base_bdevs_operational": 4, 00:22:26.563 "base_bdevs_list": [ 00:22:26.563 { 00:22:26.563 "name": "spare", 00:22:26.563 "uuid": "29bd81bb-213f-5f09-b759-0a266bca1841", 00:22:26.563 "is_configured": true, 00:22:26.563 "data_offset": 2048, 00:22:26.563 "data_size": 63488 00:22:26.563 }, 00:22:26.563 { 00:22:26.563 "name": "BaseBdev2", 00:22:26.563 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:26.563 "is_configured": true, 00:22:26.563 "data_offset": 2048, 00:22:26.563 "data_size": 63488 00:22:26.563 }, 00:22:26.563 { 00:22:26.563 "name": "BaseBdev3", 00:22:26.563 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:26.563 "is_configured": true, 00:22:26.563 "data_offset": 2048, 00:22:26.563 "data_size": 63488 00:22:26.563 }, 00:22:26.563 { 00:22:26.563 "name": "BaseBdev4", 00:22:26.563 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:26.563 "is_configured": true, 00:22:26.563 "data_offset": 2048, 00:22:26.563 "data_size": 63488 00:22:26.563 } 00:22:26.563 ] 00:22:26.563 }' 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:26.563 
06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.563 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.821 [2024-12-06 06:48:45.239435] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:26.821 
06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.821 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.821 "name": "raid_bdev1", 00:22:26.821 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:26.821 "strip_size_kb": 64, 00:22:26.821 "state": "online", 00:22:26.821 "raid_level": "raid5f", 00:22:26.821 "superblock": true, 00:22:26.821 "num_base_bdevs": 4, 00:22:26.821 "num_base_bdevs_discovered": 3, 00:22:26.821 "num_base_bdevs_operational": 3, 00:22:26.821 "base_bdevs_list": [ 00:22:26.821 { 00:22:26.821 "name": null, 00:22:26.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.821 "is_configured": false, 00:22:26.821 "data_offset": 0, 00:22:26.821 "data_size": 63488 00:22:26.821 }, 00:22:26.821 { 00:22:26.821 "name": "BaseBdev2", 00:22:26.821 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:26.821 "is_configured": true, 00:22:26.821 "data_offset": 2048, 00:22:26.821 "data_size": 63488 00:22:26.821 }, 00:22:26.822 { 00:22:26.822 "name": "BaseBdev3", 00:22:26.822 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:26.822 "is_configured": true, 00:22:26.822 "data_offset": 2048, 00:22:26.822 "data_size": 63488 00:22:26.822 }, 00:22:26.822 { 00:22:26.822 "name": "BaseBdev4", 00:22:26.822 "uuid": 
"b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:26.822 "is_configured": true, 00:22:26.822 "data_offset": 2048, 00:22:26.822 "data_size": 63488 00:22:26.822 } 00:22:26.822 ] 00:22:26.822 }' 00:22:26.822 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.822 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.387 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:27.387 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.387 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.387 [2024-12-06 06:48:45.795637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:27.387 [2024-12-06 06:48:45.795896] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:27.387 [2024-12-06 06:48:45.795927] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:27.387 [2024-12-06 06:48:45.795987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:27.388 [2024-12-06 06:48:45.809880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:22:27.388 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.388 06:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:27.388 [2024-12-06 06:48:45.818851] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:28.323 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:28.323 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:28.323 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:28.323 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:28.323 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:28.323 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.323 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.323 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.323 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.323 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.323 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:28.323 "name": "raid_bdev1", 00:22:28.323 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:28.323 "strip_size_kb": 64, 00:22:28.323 "state": "online", 00:22:28.323 
"raid_level": "raid5f", 00:22:28.323 "superblock": true, 00:22:28.323 "num_base_bdevs": 4, 00:22:28.323 "num_base_bdevs_discovered": 4, 00:22:28.323 "num_base_bdevs_operational": 4, 00:22:28.323 "process": { 00:22:28.323 "type": "rebuild", 00:22:28.323 "target": "spare", 00:22:28.323 "progress": { 00:22:28.323 "blocks": 17280, 00:22:28.323 "percent": 9 00:22:28.323 } 00:22:28.323 }, 00:22:28.323 "base_bdevs_list": [ 00:22:28.323 { 00:22:28.323 "name": "spare", 00:22:28.323 "uuid": "29bd81bb-213f-5f09-b759-0a266bca1841", 00:22:28.323 "is_configured": true, 00:22:28.323 "data_offset": 2048, 00:22:28.323 "data_size": 63488 00:22:28.323 }, 00:22:28.323 { 00:22:28.323 "name": "BaseBdev2", 00:22:28.323 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:28.323 "is_configured": true, 00:22:28.323 "data_offset": 2048, 00:22:28.323 "data_size": 63488 00:22:28.323 }, 00:22:28.323 { 00:22:28.323 "name": "BaseBdev3", 00:22:28.323 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:28.323 "is_configured": true, 00:22:28.323 "data_offset": 2048, 00:22:28.323 "data_size": 63488 00:22:28.323 }, 00:22:28.323 { 00:22:28.323 "name": "BaseBdev4", 00:22:28.323 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:28.323 "is_configured": true, 00:22:28.323 "data_offset": 2048, 00:22:28.323 "data_size": 63488 00:22:28.323 } 00:22:28.323 ] 00:22:28.323 }' 00:22:28.323 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:28.323 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:28.323 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:28.582 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:28.582 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:28.582 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.582 06:48:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.582 [2024-12-06 06:48:46.980660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:28.582 [2024-12-06 06:48:47.031902] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:28.582 [2024-12-06 06:48:47.032201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.582 [2024-12-06 06:48:47.032248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:28.582 [2024-12-06 06:48:47.032268] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:28.582 "name": "raid_bdev1", 00:22:28.582 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:28.582 "strip_size_kb": 64, 00:22:28.582 "state": "online", 00:22:28.582 "raid_level": "raid5f", 00:22:28.582 "superblock": true, 00:22:28.582 "num_base_bdevs": 4, 00:22:28.582 "num_base_bdevs_discovered": 3, 00:22:28.582 "num_base_bdevs_operational": 3, 00:22:28.582 "base_bdevs_list": [ 00:22:28.582 { 00:22:28.582 "name": null, 00:22:28.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.582 "is_configured": false, 00:22:28.582 "data_offset": 0, 00:22:28.582 "data_size": 63488 00:22:28.582 }, 00:22:28.582 { 00:22:28.582 "name": "BaseBdev2", 00:22:28.582 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:28.582 "is_configured": true, 00:22:28.582 "data_offset": 2048, 00:22:28.582 "data_size": 63488 00:22:28.582 }, 00:22:28.582 { 00:22:28.582 "name": "BaseBdev3", 00:22:28.582 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:28.582 "is_configured": true, 00:22:28.582 "data_offset": 2048, 00:22:28.582 "data_size": 63488 00:22:28.582 }, 00:22:28.582 { 00:22:28.582 "name": "BaseBdev4", 00:22:28.582 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:28.582 "is_configured": true, 00:22:28.582 "data_offset": 2048, 00:22:28.582 "data_size": 63488 00:22:28.582 } 00:22:28.582 ] 00:22:28.582 }' 
00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:28.582 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.150 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:29.150 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.150 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.150 [2024-12-06 06:48:47.596911] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:29.150 [2024-12-06 06:48:47.596993] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.150 [2024-12-06 06:48:47.597031] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:22:29.150 [2024-12-06 06:48:47.597051] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.150 [2024-12-06 06:48:47.597727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.150 [2024-12-06 06:48:47.597774] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:29.150 [2024-12-06 06:48:47.597913] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:29.150 [2024-12-06 06:48:47.597939] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:29.150 [2024-12-06 06:48:47.597953] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:29.150 [2024-12-06 06:48:47.597996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:29.150 [2024-12-06 06:48:47.611417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:22:29.150 spare 00:22:29.150 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.150 06:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:29.150 [2024-12-06 06:48:47.620264] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:30.098 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:30.098 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:30.098 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:30.098 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:30.098 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:30.098 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.098 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.098 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.098 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.098 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.098 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:30.098 "name": "raid_bdev1", 00:22:30.098 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:30.098 "strip_size_kb": 64, 00:22:30.098 "state": 
"online", 00:22:30.098 "raid_level": "raid5f", 00:22:30.098 "superblock": true, 00:22:30.098 "num_base_bdevs": 4, 00:22:30.098 "num_base_bdevs_discovered": 4, 00:22:30.098 "num_base_bdevs_operational": 4, 00:22:30.098 "process": { 00:22:30.098 "type": "rebuild", 00:22:30.098 "target": "spare", 00:22:30.098 "progress": { 00:22:30.098 "blocks": 17280, 00:22:30.098 "percent": 9 00:22:30.098 } 00:22:30.098 }, 00:22:30.098 "base_bdevs_list": [ 00:22:30.098 { 00:22:30.098 "name": "spare", 00:22:30.098 "uuid": "29bd81bb-213f-5f09-b759-0a266bca1841", 00:22:30.098 "is_configured": true, 00:22:30.098 "data_offset": 2048, 00:22:30.098 "data_size": 63488 00:22:30.098 }, 00:22:30.098 { 00:22:30.098 "name": "BaseBdev2", 00:22:30.098 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:30.098 "is_configured": true, 00:22:30.098 "data_offset": 2048, 00:22:30.098 "data_size": 63488 00:22:30.098 }, 00:22:30.098 { 00:22:30.098 "name": "BaseBdev3", 00:22:30.098 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:30.098 "is_configured": true, 00:22:30.098 "data_offset": 2048, 00:22:30.098 "data_size": 63488 00:22:30.098 }, 00:22:30.098 { 00:22:30.098 "name": "BaseBdev4", 00:22:30.098 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:30.098 "is_configured": true, 00:22:30.098 "data_offset": 2048, 00:22:30.098 "data_size": 63488 00:22:30.098 } 00:22:30.098 ] 00:22:30.098 }' 00:22:30.098 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:30.098 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:30.098 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:30.356 06:48:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.356 [2024-12-06 06:48:48.790247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:30.356 [2024-12-06 06:48:48.833091] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:30.356 [2024-12-06 06:48:48.833400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:30.356 [2024-12-06 06:48:48.833441] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:30.356 [2024-12-06 06:48:48.833455] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:30.356 06:48:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:30.356 "name": "raid_bdev1", 00:22:30.356 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:30.356 "strip_size_kb": 64, 00:22:30.356 "state": "online", 00:22:30.356 "raid_level": "raid5f", 00:22:30.356 "superblock": true, 00:22:30.356 "num_base_bdevs": 4, 00:22:30.356 "num_base_bdevs_discovered": 3, 00:22:30.356 "num_base_bdevs_operational": 3, 00:22:30.356 "base_bdevs_list": [ 00:22:30.356 { 00:22:30.356 "name": null, 00:22:30.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.356 "is_configured": false, 00:22:30.356 "data_offset": 0, 00:22:30.356 "data_size": 63488 00:22:30.356 }, 00:22:30.356 { 00:22:30.356 "name": "BaseBdev2", 00:22:30.356 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:30.356 "is_configured": true, 00:22:30.356 "data_offset": 2048, 00:22:30.356 "data_size": 63488 00:22:30.356 }, 00:22:30.356 { 00:22:30.356 "name": "BaseBdev3", 00:22:30.356 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:30.356 "is_configured": true, 00:22:30.356 "data_offset": 2048, 00:22:30.356 "data_size": 63488 00:22:30.356 }, 00:22:30.356 { 00:22:30.356 "name": "BaseBdev4", 00:22:30.356 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:30.356 "is_configured": true, 00:22:30.356 "data_offset": 2048, 00:22:30.356 
"data_size": 63488 00:22:30.356 } 00:22:30.356 ] 00:22:30.356 }' 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:30.356 06:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.924 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:30.924 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:30.924 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:30.924 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:30.924 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:30.924 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.924 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.924 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.924 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.924 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.924 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:30.924 "name": "raid_bdev1", 00:22:30.924 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:30.924 "strip_size_kb": 64, 00:22:30.924 "state": "online", 00:22:30.924 "raid_level": "raid5f", 00:22:30.924 "superblock": true, 00:22:30.924 "num_base_bdevs": 4, 00:22:30.924 "num_base_bdevs_discovered": 3, 00:22:30.924 "num_base_bdevs_operational": 3, 00:22:30.924 "base_bdevs_list": [ 00:22:30.924 { 00:22:30.924 "name": null, 00:22:30.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.924 
"is_configured": false, 00:22:30.924 "data_offset": 0, 00:22:30.924 "data_size": 63488 00:22:30.924 }, 00:22:30.924 { 00:22:30.924 "name": "BaseBdev2", 00:22:30.924 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:30.924 "is_configured": true, 00:22:30.924 "data_offset": 2048, 00:22:30.924 "data_size": 63488 00:22:30.924 }, 00:22:30.924 { 00:22:30.924 "name": "BaseBdev3", 00:22:30.924 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:30.924 "is_configured": true, 00:22:30.924 "data_offset": 2048, 00:22:30.924 "data_size": 63488 00:22:30.924 }, 00:22:30.924 { 00:22:30.924 "name": "BaseBdev4", 00:22:30.924 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:30.924 "is_configured": true, 00:22:30.924 "data_offset": 2048, 00:22:30.924 "data_size": 63488 00:22:30.924 } 00:22:30.924 ] 00:22:30.924 }' 00:22:30.924 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:30.924 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:30.924 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:31.182 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:31.182 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:31.182 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.182 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.182 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.182 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:31.182 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.182 06:48:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.182 [2024-12-06 06:48:49.592464] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:31.182 [2024-12-06 06:48:49.592690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.182 [2024-12-06 06:48:49.592735] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:22:31.182 [2024-12-06 06:48:49.592754] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.182 [2024-12-06 06:48:49.593337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.182 [2024-12-06 06:48:49.593369] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:31.182 [2024-12-06 06:48:49.593500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:31.182 [2024-12-06 06:48:49.593542] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:31.182 [2024-12-06 06:48:49.593563] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:31.182 [2024-12-06 06:48:49.593576] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:31.182 BaseBdev1 00:22:31.182 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.182 06:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:32.116 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:32.116 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:32.116 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:22:32.116 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:32.116 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:32.116 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:32.117 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:32.117 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:32.117 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:32.117 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:32.117 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.117 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.117 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.117 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.117 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.117 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:32.117 "name": "raid_bdev1", 00:22:32.117 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:32.117 "strip_size_kb": 64, 00:22:32.117 "state": "online", 00:22:32.117 "raid_level": "raid5f", 00:22:32.117 "superblock": true, 00:22:32.117 "num_base_bdevs": 4, 00:22:32.117 "num_base_bdevs_discovered": 3, 00:22:32.117 "num_base_bdevs_operational": 3, 00:22:32.117 "base_bdevs_list": [ 00:22:32.117 { 00:22:32.117 "name": null, 00:22:32.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.117 "is_configured": false, 00:22:32.117 
"data_offset": 0, 00:22:32.117 "data_size": 63488 00:22:32.117 }, 00:22:32.117 { 00:22:32.117 "name": "BaseBdev2", 00:22:32.117 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:32.117 "is_configured": true, 00:22:32.117 "data_offset": 2048, 00:22:32.117 "data_size": 63488 00:22:32.117 }, 00:22:32.117 { 00:22:32.117 "name": "BaseBdev3", 00:22:32.117 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:32.117 "is_configured": true, 00:22:32.117 "data_offset": 2048, 00:22:32.117 "data_size": 63488 00:22:32.117 }, 00:22:32.117 { 00:22:32.117 "name": "BaseBdev4", 00:22:32.117 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:32.117 "is_configured": true, 00:22:32.117 "data_offset": 2048, 00:22:32.117 "data_size": 63488 00:22:32.117 } 00:22:32.117 ] 00:22:32.117 }' 00:22:32.117 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:32.117 06:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:32.685 "name": "raid_bdev1", 00:22:32.685 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:32.685 "strip_size_kb": 64, 00:22:32.685 "state": "online", 00:22:32.685 "raid_level": "raid5f", 00:22:32.685 "superblock": true, 00:22:32.685 "num_base_bdevs": 4, 00:22:32.685 "num_base_bdevs_discovered": 3, 00:22:32.685 "num_base_bdevs_operational": 3, 00:22:32.685 "base_bdevs_list": [ 00:22:32.685 { 00:22:32.685 "name": null, 00:22:32.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.685 "is_configured": false, 00:22:32.685 "data_offset": 0, 00:22:32.685 "data_size": 63488 00:22:32.685 }, 00:22:32.685 { 00:22:32.685 "name": "BaseBdev2", 00:22:32.685 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:32.685 "is_configured": true, 00:22:32.685 "data_offset": 2048, 00:22:32.685 "data_size": 63488 00:22:32.685 }, 00:22:32.685 { 00:22:32.685 "name": "BaseBdev3", 00:22:32.685 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:32.685 "is_configured": true, 00:22:32.685 "data_offset": 2048, 00:22:32.685 "data_size": 63488 00:22:32.685 }, 00:22:32.685 { 00:22:32.685 "name": "BaseBdev4", 00:22:32.685 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:32.685 "is_configured": true, 00:22:32.685 "data_offset": 2048, 00:22:32.685 "data_size": 63488 00:22:32.685 } 00:22:32.685 ] 00:22:32.685 }' 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:32.685 
06:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.685 [2024-12-06 06:48:51.277112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:32.685 [2024-12-06 06:48:51.277327] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:32.685 [2024-12-06 06:48:51.277351] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:32.685 request: 00:22:32.685 { 00:22:32.685 "base_bdev": "BaseBdev1", 00:22:32.685 "raid_bdev": "raid_bdev1", 00:22:32.685 "method": "bdev_raid_add_base_bdev", 00:22:32.685 "req_id": 1 00:22:32.685 } 00:22:32.685 Got JSON-RPC error response 00:22:32.685 response: 00:22:32.685 { 00:22:32.685 "code": -22, 00:22:32.685 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:22:32.685 } 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:32.685 06:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:34.065 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:34.065 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:34.065 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:34.065 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:34.065 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:34.065 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:34.065 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.065 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.065 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.065 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.065 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.066 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.066 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.066 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.066 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.066 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.066 "name": "raid_bdev1", 00:22:34.066 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:34.066 "strip_size_kb": 64, 00:22:34.066 "state": "online", 00:22:34.066 "raid_level": "raid5f", 00:22:34.066 "superblock": true, 00:22:34.066 "num_base_bdevs": 4, 00:22:34.066 "num_base_bdevs_discovered": 3, 00:22:34.066 "num_base_bdevs_operational": 3, 00:22:34.066 "base_bdevs_list": [ 00:22:34.066 { 00:22:34.066 "name": null, 00:22:34.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.066 "is_configured": false, 00:22:34.066 "data_offset": 0, 00:22:34.066 "data_size": 63488 00:22:34.066 }, 00:22:34.066 { 00:22:34.066 "name": "BaseBdev2", 00:22:34.066 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:34.066 "is_configured": true, 00:22:34.066 "data_offset": 2048, 00:22:34.066 "data_size": 63488 00:22:34.066 }, 00:22:34.066 { 00:22:34.066 "name": "BaseBdev3", 00:22:34.066 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:34.066 "is_configured": true, 00:22:34.066 "data_offset": 2048, 00:22:34.066 "data_size": 63488 00:22:34.066 }, 00:22:34.066 { 00:22:34.066 "name": "BaseBdev4", 00:22:34.066 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:34.066 "is_configured": true, 00:22:34.066 "data_offset": 2048, 00:22:34.066 "data_size": 63488 00:22:34.066 } 00:22:34.066 ] 00:22:34.066 }' 00:22:34.066 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.066 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:22:34.325 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:34.325 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:34.325 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:34.325 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:34.325 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:34.325 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.325 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.325 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.325 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.325 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.325 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:34.325 "name": "raid_bdev1", 00:22:34.325 "uuid": "75c6750b-1802-44d3-a6e9-042100a593b6", 00:22:34.325 "strip_size_kb": 64, 00:22:34.325 "state": "online", 00:22:34.325 "raid_level": "raid5f", 00:22:34.325 "superblock": true, 00:22:34.325 "num_base_bdevs": 4, 00:22:34.325 "num_base_bdevs_discovered": 3, 00:22:34.325 "num_base_bdevs_operational": 3, 00:22:34.325 "base_bdevs_list": [ 00:22:34.325 { 00:22:34.325 "name": null, 00:22:34.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.325 "is_configured": false, 00:22:34.325 "data_offset": 0, 00:22:34.325 "data_size": 63488 00:22:34.325 }, 00:22:34.325 { 00:22:34.325 "name": "BaseBdev2", 00:22:34.325 "uuid": "a9f20d42-768b-5d67-b38f-90594f2ffd9f", 00:22:34.325 "is_configured": true, 
00:22:34.325 "data_offset": 2048, 00:22:34.325 "data_size": 63488 00:22:34.325 }, 00:22:34.325 { 00:22:34.325 "name": "BaseBdev3", 00:22:34.325 "uuid": "9f8884a8-3f66-5075-827d-ece8615f9658", 00:22:34.325 "is_configured": true, 00:22:34.325 "data_offset": 2048, 00:22:34.325 "data_size": 63488 00:22:34.325 }, 00:22:34.325 { 00:22:34.326 "name": "BaseBdev4", 00:22:34.326 "uuid": "b6486a0c-eaa1-5b4f-b4b9-f7da86a357a6", 00:22:34.326 "is_configured": true, 00:22:34.326 "data_offset": 2048, 00:22:34.326 "data_size": 63488 00:22:34.326 } 00:22:34.326 ] 00:22:34.326 }' 00:22:34.326 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:34.326 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:34.326 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:34.326 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:34.326 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85704 00:22:34.326 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85704 ']' 00:22:34.326 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85704 00:22:34.326 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:22:34.326 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.584 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85704 00:22:34.585 killing process with pid 85704 00:22:34.585 Received shutdown signal, test time was about 60.000000 seconds 00:22:34.585 00:22:34.585 Latency(us) 00:22:34.585 [2024-12-06T06:48:53.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.585 [2024-12-06T06:48:53.232Z] 
=================================================================================================================== 00:22:34.585 [2024-12-06T06:48:53.232Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:34.585 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:34.585 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:34.585 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85704' 00:22:34.585 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85704 00:22:34.585 [2024-12-06 06:48:52.999236] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:34.585 06:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85704 00:22:34.585 [2024-12-06 06:48:52.999393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:34.585 [2024-12-06 06:48:52.999497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:34.585 [2024-12-06 06:48:52.999518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:34.843 [2024-12-06 06:48:53.451099] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:36.219 06:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:22:36.219 ************************************ 00:22:36.219 END TEST raid5f_rebuild_test_sb 00:22:36.219 ************************************ 00:22:36.219 00:22:36.219 real 0m28.938s 00:22:36.219 user 0m37.818s 00:22:36.219 sys 0m2.928s 00:22:36.219 06:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:36.219 06:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.219 06:48:54 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:22:36.219 06:48:54 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:22:36.219 06:48:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:36.219 06:48:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:36.219 06:48:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:36.219 ************************************ 00:22:36.219 START TEST raid_state_function_test_sb_4k 00:22:36.219 ************************************ 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:36.219 06:48:54 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:36.219 Process raid pid: 86526 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86526 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86526' 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86526 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86526 ']' 00:22:36.219 06:48:54 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.219 06:48:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.219 [2024-12-06 06:48:54.695848] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:22:36.219 [2024-12-06 06:48:54.696473] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.542 [2024-12-06 06:48:54.885093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.542 [2024-12-06 06:48:55.030006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.810 [2024-12-06 06:48:55.255744] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:36.810 [2024-12-06 06:48:55.256012] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.068 [2024-12-06 06:48:55.653319] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:37.068 [2024-12-06 06:48:55.653388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:37.068 [2024-12-06 06:48:55.653405] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:37.068 [2024-12-06 06:48:55.653421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.068 
06:48:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.068 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.068 "name": "Existed_Raid", 00:22:37.068 "uuid": "ed4db06e-bfc7-4a9e-a919-a1b2f3c4c0b0", 00:22:37.069 "strip_size_kb": 0, 00:22:37.069 "state": "configuring", 00:22:37.069 "raid_level": "raid1", 00:22:37.069 "superblock": true, 00:22:37.069 "num_base_bdevs": 2, 00:22:37.069 "num_base_bdevs_discovered": 0, 00:22:37.069 "num_base_bdevs_operational": 2, 00:22:37.069 "base_bdevs_list": [ 00:22:37.069 { 00:22:37.069 "name": "BaseBdev1", 00:22:37.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.069 "is_configured": false, 00:22:37.069 "data_offset": 0, 00:22:37.069 "data_size": 0 00:22:37.069 }, 00:22:37.069 { 00:22:37.069 "name": "BaseBdev2", 00:22:37.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.069 "is_configured": false, 00:22:37.069 "data_offset": 0, 00:22:37.069 "data_size": 0 00:22:37.069 } 00:22:37.069 ] 00:22:37.069 }' 00:22:37.327 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.328 06:48:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.587 [2024-12-06 06:48:56.165446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:37.587 [2024-12-06 06:48:56.165652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.587 [2024-12-06 06:48:56.173414] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:37.587 [2024-12-06 06:48:56.173468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:37.587 [2024-12-06 06:48:56.173483] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:37.587 [2024-12-06 06:48:56.173502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.587 06:48:56 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.587 [2024-12-06 06:48:56.219261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:37.587 BaseBdev1 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.587 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.847 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.847 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:37.847 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.847 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.847 [ 00:22:37.847 { 00:22:37.847 "name": "BaseBdev1", 00:22:37.847 "aliases": [ 00:22:37.847 
"1a635ed9-da70-44ce-99ea-80bf7aeb6de9" 00:22:37.847 ], 00:22:37.847 "product_name": "Malloc disk", 00:22:37.847 "block_size": 4096, 00:22:37.847 "num_blocks": 8192, 00:22:37.847 "uuid": "1a635ed9-da70-44ce-99ea-80bf7aeb6de9", 00:22:37.847 "assigned_rate_limits": { 00:22:37.847 "rw_ios_per_sec": 0, 00:22:37.847 "rw_mbytes_per_sec": 0, 00:22:37.847 "r_mbytes_per_sec": 0, 00:22:37.847 "w_mbytes_per_sec": 0 00:22:37.847 }, 00:22:37.847 "claimed": true, 00:22:37.847 "claim_type": "exclusive_write", 00:22:37.847 "zoned": false, 00:22:37.847 "supported_io_types": { 00:22:37.847 "read": true, 00:22:37.847 "write": true, 00:22:37.847 "unmap": true, 00:22:37.847 "flush": true, 00:22:37.847 "reset": true, 00:22:37.847 "nvme_admin": false, 00:22:37.847 "nvme_io": false, 00:22:37.847 "nvme_io_md": false, 00:22:37.847 "write_zeroes": true, 00:22:37.847 "zcopy": true, 00:22:37.847 "get_zone_info": false, 00:22:37.847 "zone_management": false, 00:22:37.847 "zone_append": false, 00:22:37.847 "compare": false, 00:22:37.847 "compare_and_write": false, 00:22:37.847 "abort": true, 00:22:37.847 "seek_hole": false, 00:22:37.847 "seek_data": false, 00:22:37.847 "copy": true, 00:22:37.847 "nvme_iov_md": false 00:22:37.847 }, 00:22:37.847 "memory_domains": [ 00:22:37.847 { 00:22:37.847 "dma_device_id": "system", 00:22:37.847 "dma_device_type": 1 00:22:37.847 }, 00:22:37.847 { 00:22:37.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.847 "dma_device_type": 2 00:22:37.847 } 00:22:37.847 ], 00:22:37.847 "driver_specific": {} 00:22:37.847 } 00:22:37.847 ] 00:22:37.847 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.847 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:22:37.847 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:37.847 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:37.847 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:37.847 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:37.847 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:37.848 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:37.848 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.848 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.848 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.848 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.848 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.848 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.848 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.848 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.848 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.848 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.848 "name": "Existed_Raid", 00:22:37.848 "uuid": "c6bf32ae-89ee-41d2-bb0f-1a617faadcd1", 00:22:37.848 "strip_size_kb": 0, 00:22:37.848 "state": "configuring", 00:22:37.848 "raid_level": "raid1", 00:22:37.848 "superblock": true, 00:22:37.848 "num_base_bdevs": 2, 00:22:37.848 
"num_base_bdevs_discovered": 1, 00:22:37.848 "num_base_bdevs_operational": 2, 00:22:37.848 "base_bdevs_list": [ 00:22:37.848 { 00:22:37.848 "name": "BaseBdev1", 00:22:37.848 "uuid": "1a635ed9-da70-44ce-99ea-80bf7aeb6de9", 00:22:37.848 "is_configured": true, 00:22:37.848 "data_offset": 256, 00:22:37.848 "data_size": 7936 00:22:37.848 }, 00:22:37.848 { 00:22:37.848 "name": "BaseBdev2", 00:22:37.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.848 "is_configured": false, 00:22:37.848 "data_offset": 0, 00:22:37.848 "data_size": 0 00:22:37.848 } 00:22:37.848 ] 00:22:37.848 }' 00:22:37.848 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.848 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.416 [2024-12-06 06:48:56.775458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:38.416 [2024-12-06 06:48:56.775519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.416 [2024-12-06 06:48:56.787487] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:38.416 [2024-12-06 06:48:56.790051] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:38.416 [2024-12-06 06:48:56.790262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.416 "name": "Existed_Raid", 00:22:38.416 "uuid": "ebc3db5f-bec6-4b70-a8a2-27d141a835b3", 00:22:38.416 "strip_size_kb": 0, 00:22:38.416 "state": "configuring", 00:22:38.416 "raid_level": "raid1", 00:22:38.416 "superblock": true, 00:22:38.416 "num_base_bdevs": 2, 00:22:38.416 "num_base_bdevs_discovered": 1, 00:22:38.416 "num_base_bdevs_operational": 2, 00:22:38.416 "base_bdevs_list": [ 00:22:38.416 { 00:22:38.416 "name": "BaseBdev1", 00:22:38.416 "uuid": "1a635ed9-da70-44ce-99ea-80bf7aeb6de9", 00:22:38.416 "is_configured": true, 00:22:38.416 "data_offset": 256, 00:22:38.416 "data_size": 7936 00:22:38.416 }, 00:22:38.416 { 00:22:38.416 "name": "BaseBdev2", 00:22:38.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.416 "is_configured": false, 00:22:38.416 "data_offset": 0, 00:22:38.416 "data_size": 0 00:22:38.416 } 00:22:38.416 ] 00:22:38.416 }' 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.416 06:48:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.675 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:22:38.675 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.675 06:48:57 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.934 [2024-12-06 06:48:57.353231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:38.934 [2024-12-06 06:48:57.353567] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:38.934 [2024-12-06 06:48:57.353587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:38.934 BaseBdev2 00:22:38.934 [2024-12-06 06:48:57.353920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:38.934 [2024-12-06 06:48:57.354126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:38.934 [2024-12-06 06:48:57.354156] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:38.934 [2024-12-06 06:48:57.354327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:38.934 06:48:57 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.934 [ 00:22:38.934 { 00:22:38.934 "name": "BaseBdev2", 00:22:38.934 "aliases": [ 00:22:38.934 "2c490e86-284b-44cd-b25f-9a3c2c4712f6" 00:22:38.934 ], 00:22:38.934 "product_name": "Malloc disk", 00:22:38.934 "block_size": 4096, 00:22:38.934 "num_blocks": 8192, 00:22:38.934 "uuid": "2c490e86-284b-44cd-b25f-9a3c2c4712f6", 00:22:38.934 "assigned_rate_limits": { 00:22:38.934 "rw_ios_per_sec": 0, 00:22:38.934 "rw_mbytes_per_sec": 0, 00:22:38.934 "r_mbytes_per_sec": 0, 00:22:38.934 "w_mbytes_per_sec": 0 00:22:38.934 }, 00:22:38.934 "claimed": true, 00:22:38.934 "claim_type": "exclusive_write", 00:22:38.934 "zoned": false, 00:22:38.934 "supported_io_types": { 00:22:38.934 "read": true, 00:22:38.934 "write": true, 00:22:38.934 "unmap": true, 00:22:38.934 "flush": true, 00:22:38.934 "reset": true, 00:22:38.934 "nvme_admin": false, 00:22:38.934 "nvme_io": false, 00:22:38.934 "nvme_io_md": false, 00:22:38.934 "write_zeroes": true, 00:22:38.934 "zcopy": true, 00:22:38.934 "get_zone_info": false, 00:22:38.934 "zone_management": false, 00:22:38.934 "zone_append": false, 00:22:38.934 "compare": false, 00:22:38.934 "compare_and_write": false, 00:22:38.934 "abort": true, 00:22:38.934 "seek_hole": false, 00:22:38.934 "seek_data": false, 00:22:38.934 "copy": true, 00:22:38.934 "nvme_iov_md": false 
00:22:38.934 }, 00:22:38.934 "memory_domains": [ 00:22:38.934 { 00:22:38.934 "dma_device_id": "system", 00:22:38.934 "dma_device_type": 1 00:22:38.934 }, 00:22:38.934 { 00:22:38.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.934 "dma_device_type": 2 00:22:38.934 } 00:22:38.934 ], 00:22:38.934 "driver_specific": {} 00:22:38.934 } 00:22:38.934 ] 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.934 "name": "Existed_Raid", 00:22:38.934 "uuid": "ebc3db5f-bec6-4b70-a8a2-27d141a835b3", 00:22:38.934 "strip_size_kb": 0, 00:22:38.934 "state": "online", 00:22:38.934 "raid_level": "raid1", 00:22:38.934 "superblock": true, 00:22:38.934 "num_base_bdevs": 2, 00:22:38.934 "num_base_bdevs_discovered": 2, 00:22:38.934 "num_base_bdevs_operational": 2, 00:22:38.934 "base_bdevs_list": [ 00:22:38.934 { 00:22:38.934 "name": "BaseBdev1", 00:22:38.934 "uuid": "1a635ed9-da70-44ce-99ea-80bf7aeb6de9", 00:22:38.934 "is_configured": true, 00:22:38.934 "data_offset": 256, 00:22:38.934 "data_size": 7936 00:22:38.934 }, 00:22:38.934 { 00:22:38.934 "name": "BaseBdev2", 00:22:38.934 "uuid": "2c490e86-284b-44cd-b25f-9a3c2c4712f6", 00:22:38.934 "is_configured": true, 00:22:38.934 "data_offset": 256, 00:22:38.934 "data_size": 7936 00:22:38.934 } 00:22:38.934 ] 00:22:38.934 }' 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.934 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:39.503 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:39.503 06:48:57 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:39.503 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:39.503 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:39.503 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:22:39.503 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:39.503 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:39.503 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:39.503 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.503 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:39.503 [2024-12-06 06:48:57.933901] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:39.503 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.503 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:39.503 "name": "Existed_Raid", 00:22:39.503 "aliases": [ 00:22:39.503 "ebc3db5f-bec6-4b70-a8a2-27d141a835b3" 00:22:39.503 ], 00:22:39.503 "product_name": "Raid Volume", 00:22:39.503 "block_size": 4096, 00:22:39.503 "num_blocks": 7936, 00:22:39.503 "uuid": "ebc3db5f-bec6-4b70-a8a2-27d141a835b3", 00:22:39.503 "assigned_rate_limits": { 00:22:39.503 "rw_ios_per_sec": 0, 00:22:39.503 "rw_mbytes_per_sec": 0, 00:22:39.503 "r_mbytes_per_sec": 0, 00:22:39.503 "w_mbytes_per_sec": 0 00:22:39.503 }, 00:22:39.503 "claimed": false, 00:22:39.503 "zoned": false, 00:22:39.503 "supported_io_types": { 00:22:39.503 "read": true, 
00:22:39.503 "write": true, 00:22:39.503 "unmap": false, 00:22:39.503 "flush": false, 00:22:39.503 "reset": true, 00:22:39.503 "nvme_admin": false, 00:22:39.503 "nvme_io": false, 00:22:39.503 "nvme_io_md": false, 00:22:39.503 "write_zeroes": true, 00:22:39.503 "zcopy": false, 00:22:39.503 "get_zone_info": false, 00:22:39.503 "zone_management": false, 00:22:39.503 "zone_append": false, 00:22:39.503 "compare": false, 00:22:39.503 "compare_and_write": false, 00:22:39.503 "abort": false, 00:22:39.503 "seek_hole": false, 00:22:39.503 "seek_data": false, 00:22:39.503 "copy": false, 00:22:39.503 "nvme_iov_md": false 00:22:39.503 }, 00:22:39.503 "memory_domains": [ 00:22:39.503 { 00:22:39.503 "dma_device_id": "system", 00:22:39.503 "dma_device_type": 1 00:22:39.503 }, 00:22:39.503 { 00:22:39.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.503 "dma_device_type": 2 00:22:39.503 }, 00:22:39.503 { 00:22:39.503 "dma_device_id": "system", 00:22:39.503 "dma_device_type": 1 00:22:39.503 }, 00:22:39.503 { 00:22:39.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.503 "dma_device_type": 2 00:22:39.503 } 00:22:39.503 ], 00:22:39.503 "driver_specific": { 00:22:39.503 "raid": { 00:22:39.503 "uuid": "ebc3db5f-bec6-4b70-a8a2-27d141a835b3", 00:22:39.503 "strip_size_kb": 0, 00:22:39.503 "state": "online", 00:22:39.503 "raid_level": "raid1", 00:22:39.503 "superblock": true, 00:22:39.503 "num_base_bdevs": 2, 00:22:39.503 "num_base_bdevs_discovered": 2, 00:22:39.503 "num_base_bdevs_operational": 2, 00:22:39.503 "base_bdevs_list": [ 00:22:39.503 { 00:22:39.503 "name": "BaseBdev1", 00:22:39.503 "uuid": "1a635ed9-da70-44ce-99ea-80bf7aeb6de9", 00:22:39.503 "is_configured": true, 00:22:39.503 "data_offset": 256, 00:22:39.503 "data_size": 7936 00:22:39.503 }, 00:22:39.503 { 00:22:39.503 "name": "BaseBdev2", 00:22:39.503 "uuid": "2c490e86-284b-44cd-b25f-9a3c2c4712f6", 00:22:39.503 "is_configured": true, 00:22:39.503 "data_offset": 256, 00:22:39.503 "data_size": 7936 00:22:39.503 } 
00:22:39.503 ] 00:22:39.503 } 00:22:39.503 } 00:22:39.503 }' 00:22:39.503 06:48:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:39.503 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:39.503 BaseBdev2' 00:22:39.503 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:39.503 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:22:39.503 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:39.503 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:39.503 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:39.503 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.503 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:39.503 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:39.762 [2024-12-06 06:48:58.205618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:39.762 06:48:58 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:39.762 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:39.763 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:39.763 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:39.763 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:39.763 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:39.763 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:39.763 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.763 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:39.763 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.763 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:39.763 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.763 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:39.763 "name": "Existed_Raid", 00:22:39.763 "uuid": "ebc3db5f-bec6-4b70-a8a2-27d141a835b3", 00:22:39.763 "strip_size_kb": 0, 00:22:39.763 "state": "online", 00:22:39.763 "raid_level": "raid1", 00:22:39.763 "superblock": true, 00:22:39.763 
"num_base_bdevs": 2, 00:22:39.763 "num_base_bdevs_discovered": 1, 00:22:39.763 "num_base_bdevs_operational": 1, 00:22:39.763 "base_bdevs_list": [ 00:22:39.763 { 00:22:39.763 "name": null, 00:22:39.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.763 "is_configured": false, 00:22:39.763 "data_offset": 0, 00:22:39.763 "data_size": 7936 00:22:39.763 }, 00:22:39.763 { 00:22:39.763 "name": "BaseBdev2", 00:22:39.763 "uuid": "2c490e86-284b-44cd-b25f-9a3c2c4712f6", 00:22:39.763 "is_configured": true, 00:22:39.763 "data_offset": 256, 00:22:39.763 "data_size": 7936 00:22:39.763 } 00:22:39.763 ] 00:22:39.763 }' 00:22:39.763 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:39.763 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:40.332 [2024-12-06 06:48:58.872950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:40.332 [2024-12-06 06:48:58.873080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:40.332 [2024-12-06 06:48:58.962667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:40.332 [2024-12-06 06:48:58.962960] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:40.332 [2024-12-06 06:48:58.963193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.332 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:40.591 06:48:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.591 06:48:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:40.591 06:48:59 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:40.591 06:48:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:40.591 06:48:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86526 00:22:40.591 06:48:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86526 ']' 00:22:40.591 06:48:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86526 00:22:40.591 06:48:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:22:40.591 06:48:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.591 06:48:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86526 00:22:40.591 killing process with pid 86526 00:22:40.591 06:48:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:40.591 06:48:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:40.591 06:48:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86526' 00:22:40.591 06:48:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86526 00:22:40.591 [2024-12-06 06:48:59.054561] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:40.591 06:48:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86526 00:22:40.591 [2024-12-06 06:48:59.069493] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:41.528 06:49:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:22:41.529 00:22:41.529 real 0m5.571s 00:22:41.529 user 0m8.387s 00:22:41.529 sys 0m0.819s 00:22:41.529 06:49:00 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:41.529 06:49:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.529 ************************************ 00:22:41.529 END TEST raid_state_function_test_sb_4k 00:22:41.529 ************************************ 00:22:41.788 06:49:00 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:22:41.788 06:49:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:41.788 06:49:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:41.788 06:49:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:41.788 ************************************ 00:22:41.788 START TEST raid_superblock_test_4k 00:22:41.788 ************************************ 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:41.788 
06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86774 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86774 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86774 ']' 00:22:41.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.788 06:49:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.788 [2024-12-06 06:49:00.324313] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:22:41.788 [2024-12-06 06:49:00.324489] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86774 ] 00:22:42.050 [2024-12-06 06:49:00.515067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.050 [2024-12-06 06:49:00.674428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.309 [2024-12-06 06:49:00.925502] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:42.309 [2024-12-06 06:49:00.925576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.876 malloc1 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.876 [2024-12-06 06:49:01.350021] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:42.876 [2024-12-06 06:49:01.350275] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.876 [2024-12-06 06:49:01.350353] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:42.876 [2024-12-06 06:49:01.350510] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.876 [2024-12-06 06:49:01.353638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.876 [2024-12-06 06:49:01.353809] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:42.876 pt1 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.876 malloc2 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.876 [2024-12-06 06:49:01.406454] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:42.876 [2024-12-06 06:49:01.406543] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.876 [2024-12-06 06:49:01.406593] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:42.876 [2024-12-06 06:49:01.406625] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.876 [2024-12-06 06:49:01.409551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.876 [2024-12-06 
06:49:01.409595] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:42.876 pt2 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.876 [2024-12-06 06:49:01.418571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:42.876 [2024-12-06 06:49:01.421379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:42.876 [2024-12-06 06:49:01.421618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:42.876 [2024-12-06 06:49:01.421648] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:42.876 [2024-12-06 06:49:01.421976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:42.876 [2024-12-06 06:49:01.422189] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:42.876 [2024-12-06 06:49:01.422214] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:42.876 [2024-12-06 06:49:01.422435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:42.876 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:42.877 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:42.877 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.877 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.877 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.877 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.877 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.877 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.877 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.877 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.877 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.877 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.877 "name": "raid_bdev1", 00:22:42.877 "uuid": "1cbb5d03-be9d-434e-b5eb-39196368eacf", 00:22:42.877 "strip_size_kb": 0, 00:22:42.877 "state": "online", 00:22:42.877 "raid_level": "raid1", 00:22:42.877 "superblock": true, 00:22:42.877 "num_base_bdevs": 2, 00:22:42.877 
"num_base_bdevs_discovered": 2, 00:22:42.877 "num_base_bdevs_operational": 2, 00:22:42.877 "base_bdevs_list": [ 00:22:42.877 { 00:22:42.877 "name": "pt1", 00:22:42.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:42.877 "is_configured": true, 00:22:42.877 "data_offset": 256, 00:22:42.877 "data_size": 7936 00:22:42.877 }, 00:22:42.877 { 00:22:42.877 "name": "pt2", 00:22:42.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:42.877 "is_configured": true, 00:22:42.877 "data_offset": 256, 00:22:42.877 "data_size": 7936 00:22:42.877 } 00:22:42.877 ] 00:22:42.877 }' 00:22:42.877 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.877 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.444 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:43.444 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:43.444 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:43.445 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:43.445 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:22:43.445 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:43.445 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:43.445 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:43.445 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.445 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.445 [2024-12-06 06:49:01.931077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:22:43.445 06:49:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.445 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:43.445 "name": "raid_bdev1", 00:22:43.445 "aliases": [ 00:22:43.445 "1cbb5d03-be9d-434e-b5eb-39196368eacf" 00:22:43.445 ], 00:22:43.445 "product_name": "Raid Volume", 00:22:43.445 "block_size": 4096, 00:22:43.445 "num_blocks": 7936, 00:22:43.445 "uuid": "1cbb5d03-be9d-434e-b5eb-39196368eacf", 00:22:43.445 "assigned_rate_limits": { 00:22:43.445 "rw_ios_per_sec": 0, 00:22:43.445 "rw_mbytes_per_sec": 0, 00:22:43.445 "r_mbytes_per_sec": 0, 00:22:43.445 "w_mbytes_per_sec": 0 00:22:43.445 }, 00:22:43.445 "claimed": false, 00:22:43.445 "zoned": false, 00:22:43.445 "supported_io_types": { 00:22:43.445 "read": true, 00:22:43.445 "write": true, 00:22:43.445 "unmap": false, 00:22:43.445 "flush": false, 00:22:43.445 "reset": true, 00:22:43.445 "nvme_admin": false, 00:22:43.445 "nvme_io": false, 00:22:43.445 "nvme_io_md": false, 00:22:43.445 "write_zeroes": true, 00:22:43.445 "zcopy": false, 00:22:43.445 "get_zone_info": false, 00:22:43.445 "zone_management": false, 00:22:43.445 "zone_append": false, 00:22:43.445 "compare": false, 00:22:43.445 "compare_and_write": false, 00:22:43.445 "abort": false, 00:22:43.445 "seek_hole": false, 00:22:43.445 "seek_data": false, 00:22:43.445 "copy": false, 00:22:43.445 "nvme_iov_md": false 00:22:43.445 }, 00:22:43.445 "memory_domains": [ 00:22:43.445 { 00:22:43.445 "dma_device_id": "system", 00:22:43.445 "dma_device_type": 1 00:22:43.445 }, 00:22:43.445 { 00:22:43.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.445 "dma_device_type": 2 00:22:43.445 }, 00:22:43.445 { 00:22:43.445 "dma_device_id": "system", 00:22:43.445 "dma_device_type": 1 00:22:43.445 }, 00:22:43.445 { 00:22:43.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.445 "dma_device_type": 2 00:22:43.445 } 00:22:43.445 ], 
00:22:43.445 "driver_specific": { 00:22:43.445 "raid": { 00:22:43.445 "uuid": "1cbb5d03-be9d-434e-b5eb-39196368eacf", 00:22:43.445 "strip_size_kb": 0, 00:22:43.445 "state": "online", 00:22:43.445 "raid_level": "raid1", 00:22:43.445 "superblock": true, 00:22:43.445 "num_base_bdevs": 2, 00:22:43.445 "num_base_bdevs_discovered": 2, 00:22:43.445 "num_base_bdevs_operational": 2, 00:22:43.445 "base_bdevs_list": [ 00:22:43.445 { 00:22:43.445 "name": "pt1", 00:22:43.445 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:43.445 "is_configured": true, 00:22:43.445 "data_offset": 256, 00:22:43.445 "data_size": 7936 00:22:43.445 }, 00:22:43.445 { 00:22:43.445 "name": "pt2", 00:22:43.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:43.445 "is_configured": true, 00:22:43.445 "data_offset": 256, 00:22:43.445 "data_size": 7936 00:22:43.445 } 00:22:43.445 ] 00:22:43.445 } 00:22:43.445 } 00:22:43.445 }' 00:22:43.445 06:49:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:43.445 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:43.445 pt2' 00:22:43.445 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:43.445 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:22:43.445 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:43.445 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:43.445 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:43.445 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.445 06:49:02 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:43.742 [2024-12-06 06:49:02.191097] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1cbb5d03-be9d-434e-b5eb-39196368eacf 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 1cbb5d03-be9d-434e-b5eb-39196368eacf ']' 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.742 [2024-12-06 06:49:02.242710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:43.742 [2024-12-06 06:49:02.242737] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:43.742 [2024-12-06 06:49:02.242829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:43.742 [2024-12-06 06:49:02.242907] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:43.742 [2024-12-06 06:49:02.242927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.742 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.742 [2024-12-06 06:49:02.378810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:43.742 [2024-12-06 06:49:02.381545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:43.742 [2024-12-06 06:49:02.381669] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:43.742 [2024-12-06 06:49:02.381763] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:43.742 [2024-12-06 06:49:02.381790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:43.742 [2024-12-06 06:49:02.381807] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:43.742 request: 00:22:43.742 { 00:22:43.742 "name": "raid_bdev1", 00:22:43.742 "raid_level": "raid1", 00:22:43.742 "base_bdevs": [ 00:22:43.742 "malloc1", 00:22:43.742 "malloc2" 00:22:44.001 ], 00:22:44.001 "superblock": false, 00:22:44.001 "method": "bdev_raid_create", 00:22:44.001 "req_id": 1 00:22:44.001 } 00:22:44.001 Got JSON-RPC error response 00:22:44.001 response: 00:22:44.001 { 00:22:44.001 "code": -17, 00:22:44.001 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:44.001 } 00:22:44.001 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:44.001 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:22:44.001 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:44.001 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:44.001 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:44.001 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.002 [2024-12-06 06:49:02.438813] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:44.002 [2024-12-06 06:49:02.439008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.002 [2024-12-06 06:49:02.439081] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:44.002 [2024-12-06 06:49:02.439299] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.002 [2024-12-06 06:49:02.442562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.002 [2024-12-06 06:49:02.442743] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:44.002 [2024-12-06 06:49:02.442964] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:44.002 [2024-12-06 06:49:02.443156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:44.002 pt1 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.002 "name": "raid_bdev1", 00:22:44.002 "uuid": "1cbb5d03-be9d-434e-b5eb-39196368eacf", 00:22:44.002 "strip_size_kb": 0, 00:22:44.002 "state": "configuring", 00:22:44.002 "raid_level": "raid1", 00:22:44.002 "superblock": true, 00:22:44.002 "num_base_bdevs": 2, 00:22:44.002 "num_base_bdevs_discovered": 1, 00:22:44.002 "num_base_bdevs_operational": 2, 00:22:44.002 "base_bdevs_list": [ 00:22:44.002 { 00:22:44.002 "name": "pt1", 00:22:44.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:44.002 "is_configured": true, 00:22:44.002 "data_offset": 256, 00:22:44.002 "data_size": 7936 00:22:44.002 }, 00:22:44.002 { 00:22:44.002 "name": null, 00:22:44.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:44.002 "is_configured": false, 00:22:44.002 "data_offset": 256, 00:22:44.002 "data_size": 7936 00:22:44.002 } 
00:22:44.002 ] 00:22:44.002 }' 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.002 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.570 [2024-12-06 06:49:02.943240] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:44.570 [2024-12-06 06:49:02.943353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.570 [2024-12-06 06:49:02.943387] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:44.570 [2024-12-06 06:49:02.943406] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.570 [2024-12-06 06:49:02.943992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.570 [2024-12-06 06:49:02.944041] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:44.570 [2024-12-06 06:49:02.944144] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:44.570 [2024-12-06 06:49:02.944197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:44.570 [2024-12-06 06:49:02.944359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:22:44.570 [2024-12-06 06:49:02.944380] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:44.570 [2024-12-06 06:49:02.944703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:44.570 [2024-12-06 06:49:02.944901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:44.570 [2024-12-06 06:49:02.944916] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:44.570 [2024-12-06 06:49:02.945142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:44.570 pt2 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.570 06:49:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.571 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.571 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.571 06:49:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.571 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.571 "name": "raid_bdev1", 00:22:44.571 "uuid": "1cbb5d03-be9d-434e-b5eb-39196368eacf", 00:22:44.571 "strip_size_kb": 0, 00:22:44.571 "state": "online", 00:22:44.571 "raid_level": "raid1", 00:22:44.571 "superblock": true, 00:22:44.571 "num_base_bdevs": 2, 00:22:44.571 "num_base_bdevs_discovered": 2, 00:22:44.571 "num_base_bdevs_operational": 2, 00:22:44.571 "base_bdevs_list": [ 00:22:44.571 { 00:22:44.571 "name": "pt1", 00:22:44.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:44.571 "is_configured": true, 00:22:44.571 "data_offset": 256, 00:22:44.571 "data_size": 7936 00:22:44.571 }, 00:22:44.571 { 00:22:44.571 "name": "pt2", 00:22:44.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:44.571 "is_configured": true, 00:22:44.571 "data_offset": 256, 00:22:44.571 "data_size": 7936 00:22:44.571 } 00:22:44.571 ] 00:22:44.571 }' 00:22:44.571 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.571 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.137 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:22:45.137 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:45.137 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:45.137 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:45.137 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:22:45.137 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:45.137 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:45.137 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:45.137 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.137 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.137 [2024-12-06 06:49:03.499705] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:45.137 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.137 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:45.137 "name": "raid_bdev1", 00:22:45.137 "aliases": [ 00:22:45.137 "1cbb5d03-be9d-434e-b5eb-39196368eacf" 00:22:45.137 ], 00:22:45.137 "product_name": "Raid Volume", 00:22:45.137 "block_size": 4096, 00:22:45.137 "num_blocks": 7936, 00:22:45.137 "uuid": "1cbb5d03-be9d-434e-b5eb-39196368eacf", 00:22:45.137 "assigned_rate_limits": { 00:22:45.137 "rw_ios_per_sec": 0, 00:22:45.137 "rw_mbytes_per_sec": 0, 00:22:45.137 "r_mbytes_per_sec": 0, 00:22:45.137 "w_mbytes_per_sec": 0 00:22:45.137 }, 00:22:45.137 "claimed": false, 00:22:45.137 "zoned": false, 00:22:45.137 "supported_io_types": { 00:22:45.137 "read": true, 00:22:45.137 "write": true, 00:22:45.137 "unmap": false, 
00:22:45.137 "flush": false, 00:22:45.137 "reset": true, 00:22:45.137 "nvme_admin": false, 00:22:45.137 "nvme_io": false, 00:22:45.137 "nvme_io_md": false, 00:22:45.137 "write_zeroes": true, 00:22:45.137 "zcopy": false, 00:22:45.137 "get_zone_info": false, 00:22:45.137 "zone_management": false, 00:22:45.137 "zone_append": false, 00:22:45.137 "compare": false, 00:22:45.137 "compare_and_write": false, 00:22:45.137 "abort": false, 00:22:45.137 "seek_hole": false, 00:22:45.137 "seek_data": false, 00:22:45.137 "copy": false, 00:22:45.137 "nvme_iov_md": false 00:22:45.137 }, 00:22:45.137 "memory_domains": [ 00:22:45.137 { 00:22:45.137 "dma_device_id": "system", 00:22:45.137 "dma_device_type": 1 00:22:45.137 }, 00:22:45.137 { 00:22:45.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.137 "dma_device_type": 2 00:22:45.137 }, 00:22:45.137 { 00:22:45.137 "dma_device_id": "system", 00:22:45.137 "dma_device_type": 1 00:22:45.137 }, 00:22:45.137 { 00:22:45.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.137 "dma_device_type": 2 00:22:45.137 } 00:22:45.137 ], 00:22:45.137 "driver_specific": { 00:22:45.137 "raid": { 00:22:45.137 "uuid": "1cbb5d03-be9d-434e-b5eb-39196368eacf", 00:22:45.137 "strip_size_kb": 0, 00:22:45.137 "state": "online", 00:22:45.137 "raid_level": "raid1", 00:22:45.137 "superblock": true, 00:22:45.137 "num_base_bdevs": 2, 00:22:45.137 "num_base_bdevs_discovered": 2, 00:22:45.137 "num_base_bdevs_operational": 2, 00:22:45.137 "base_bdevs_list": [ 00:22:45.137 { 00:22:45.137 "name": "pt1", 00:22:45.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:45.137 "is_configured": true, 00:22:45.137 "data_offset": 256, 00:22:45.137 "data_size": 7936 00:22:45.137 }, 00:22:45.137 { 00:22:45.137 "name": "pt2", 00:22:45.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:45.137 "is_configured": true, 00:22:45.137 "data_offset": 256, 00:22:45.137 "data_size": 7936 00:22:45.137 } 00:22:45.137 ] 00:22:45.137 } 00:22:45.137 } 00:22:45.137 }' 00:22:45.138 
06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:45.138 pt2' 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.138 
06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:45.138 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.138 [2024-12-06 06:49:03.767807] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 1cbb5d03-be9d-434e-b5eb-39196368eacf '!=' 1cbb5d03-be9d-434e-b5eb-39196368eacf ']' 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.397 [2024-12-06 06:49:03.819556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:45.397 
06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:45.397 "name": "raid_bdev1", 00:22:45.397 "uuid": "1cbb5d03-be9d-434e-b5eb-39196368eacf", 
00:22:45.397 "strip_size_kb": 0, 00:22:45.397 "state": "online", 00:22:45.397 "raid_level": "raid1", 00:22:45.397 "superblock": true, 00:22:45.397 "num_base_bdevs": 2, 00:22:45.397 "num_base_bdevs_discovered": 1, 00:22:45.397 "num_base_bdevs_operational": 1, 00:22:45.397 "base_bdevs_list": [ 00:22:45.397 { 00:22:45.397 "name": null, 00:22:45.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.397 "is_configured": false, 00:22:45.397 "data_offset": 0, 00:22:45.397 "data_size": 7936 00:22:45.397 }, 00:22:45.397 { 00:22:45.397 "name": "pt2", 00:22:45.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:45.397 "is_configured": true, 00:22:45.397 "data_offset": 256, 00:22:45.397 "data_size": 7936 00:22:45.397 } 00:22:45.397 ] 00:22:45.397 }' 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:45.397 06:49:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.964 [2024-12-06 06:49:04.347687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:45.964 [2024-12-06 06:49:04.347851] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:45.964 [2024-12-06 06:49:04.347981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:45.964 [2024-12-06 06:49:04.348054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:45.964 [2024-12-06 06:49:04.348077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:45.964 06:49:04 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:22:45.964 06:49:04 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.964 [2024-12-06 06:49:04.415658] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:45.964 [2024-12-06 06:49:04.415868] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:45.964 [2024-12-06 06:49:04.416047] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:45.964 [2024-12-06 06:49:04.416179] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:45.964 [2024-12-06 06:49:04.419176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:45.964 [2024-12-06 06:49:04.419359] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:45.964 [2024-12-06 06:49:04.419590] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:45.964 [2024-12-06 06:49:04.419791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:45.964 [2024-12-06 06:49:04.420052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:45.964 [2024-12-06 06:49:04.420086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:45.964 [2024-12-06 06:49:04.420388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:45.964 [2024-12-06 06:49:04.420618] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:45.964 [2024-12-06 06:49:04.420635] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:22:45.964 pt2 00:22:45.964 [2024-12-06 06:49:04.420863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.964 06:49:04 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:45.964 "name": "raid_bdev1", 00:22:45.964 "uuid": "1cbb5d03-be9d-434e-b5eb-39196368eacf", 00:22:45.964 "strip_size_kb": 0, 00:22:45.964 "state": "online", 00:22:45.964 "raid_level": "raid1", 00:22:45.964 "superblock": true, 00:22:45.964 "num_base_bdevs": 2, 00:22:45.964 "num_base_bdevs_discovered": 1, 00:22:45.964 "num_base_bdevs_operational": 1, 00:22:45.964 "base_bdevs_list": [ 00:22:45.964 { 00:22:45.964 "name": null, 00:22:45.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.964 "is_configured": false, 00:22:45.964 "data_offset": 256, 00:22:45.964 "data_size": 7936 00:22:45.964 }, 00:22:45.964 { 00:22:45.964 "name": "pt2", 00:22:45.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:45.964 "is_configured": true, 00:22:45.964 "data_offset": 256, 00:22:45.964 "data_size": 7936 00:22:45.964 } 00:22:45.965 ] 00:22:45.965 }' 00:22:45.965 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:45.965 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:46.534 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:46.534 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.534 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:46.534 [2024-12-06 06:49:04.956353] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:46.534 [2024-12-06 06:49:04.956551] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:46.534 [2024-12-06 06:49:04.956673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:46.534 [2024-12-06 06:49:04.956755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:46.534 [2024-12-06 06:49:04.956772] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:46.534 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.534 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.534 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.534 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:46.534 06:49:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:46.534 06:49:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:46.534 [2024-12-06 06:49:05.016411] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:46.534 [2024-12-06 06:49:05.016657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:46.534 [2024-12-06 06:49:05.016817] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:46.534 [2024-12-06 06:49:05.016969] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:46.534 [2024-12-06 06:49:05.019995] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:46.534 pt1 00:22:46.534 [2024-12-06 06:49:05.020186] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:46.534 [2024-12-06 06:49:05.020323] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:46.534 [2024-12-06 06:49:05.020396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:46.534 [2024-12-06 06:49:05.020672] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:46.534 [2024-12-06 06:49:05.020693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:46.534 [2024-12-06 06:49:05.020719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:46.534 [2024-12-06 06:49:05.020794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:46.534 [2024-12-06 06:49:05.020909] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:46.534 [2024-12-06 06:49:05.020926] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:46.534 [2024-12-06 06:49:05.021250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.534 [2024-12-06 06:49:05.021447] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:46.534 [2024-12-06 06:49:05.021470] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:22:46.534 [2024-12-06 06:49:05.021688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:46.534 "name": "raid_bdev1", 00:22:46.534 "uuid": "1cbb5d03-be9d-434e-b5eb-39196368eacf", 00:22:46.534 "strip_size_kb": 0, 00:22:46.534 "state": "online", 00:22:46.534 "raid_level": "raid1", 
00:22:46.534 "superblock": true, 00:22:46.534 "num_base_bdevs": 2, 00:22:46.534 "num_base_bdevs_discovered": 1, 00:22:46.534 "num_base_bdevs_operational": 1, 00:22:46.534 "base_bdevs_list": [ 00:22:46.534 { 00:22:46.534 "name": null, 00:22:46.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.534 "is_configured": false, 00:22:46.534 "data_offset": 256, 00:22:46.534 "data_size": 7936 00:22:46.534 }, 00:22:46.534 { 00:22:46.534 "name": "pt2", 00:22:46.534 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:46.534 "is_configured": true, 00:22:46.534 "data_offset": 256, 00:22:46.534 "data_size": 7936 00:22:46.534 } 00:22:46.534 ] 00:22:46.534 }' 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:46.534 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.120 
[2024-12-06 06:49:05.612873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 1cbb5d03-be9d-434e-b5eb-39196368eacf '!=' 1cbb5d03-be9d-434e-b5eb-39196368eacf ']' 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86774 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86774 ']' 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86774 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86774 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:47.120 killing process with pid 86774 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86774' 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86774 00:22:47.120 [2024-12-06 06:49:05.689779] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:47.120 06:49:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86774 00:22:47.120 [2024-12-06 06:49:05.689898] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:47.120 [2024-12-06 06:49:05.689974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:22:47.120 [2024-12-06 06:49:05.690000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:47.379 [2024-12-06 06:49:05.881248] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:48.756 06:49:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:22:48.756 00:22:48.756 real 0m6.751s 00:22:48.756 user 0m10.647s 00:22:48.756 sys 0m1.011s 00:22:48.756 06:49:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.756 ************************************ 00:22:48.756 END TEST raid_superblock_test_4k 00:22:48.756 ************************************ 00:22:48.756 06:49:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.756 06:49:07 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:22:48.756 06:49:07 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:22:48.756 06:49:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:48.756 06:49:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:48.756 06:49:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:48.756 ************************************ 00:22:48.756 START TEST raid_rebuild_test_sb_4k 00:22:48.756 ************************************ 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:48.756 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:48.757 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:48.757 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:48.757 06:49:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:48.757 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:48.757 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87108 00:22:48.757 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87108 00:22:48.757 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:48.757 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87108 ']' 00:22:48.757 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.757 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.757 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.757 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.757 06:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.757 [2024-12-06 06:49:07.143234] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:22:48.757 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:48.757 Zero copy mechanism will not be used. 
00:22:48.757 [2024-12-06 06:49:07.143697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87108 ] 00:22:48.757 [2024-12-06 06:49:07.336186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.016 [2024-12-06 06:49:07.494870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.275 [2024-12-06 06:49:07.737715] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:49.275 [2024-12-06 06:49:07.737766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.842 BaseBdev1_malloc 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.842 [2024-12-06 06:49:08.256081] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:49.842 [2024-12-06 06:49:08.256303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:49.842 [2024-12-06 06:49:08.256377] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:49.842 [2024-12-06 06:49:08.256502] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:49.842 [2024-12-06 06:49:08.259317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:49.842 [2024-12-06 06:49:08.259366] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:49.842 BaseBdev1 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.842 BaseBdev2_malloc 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.842 [2024-12-06 06:49:08.313155] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:49.842 [2024-12-06 06:49:08.313415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:22:49.842 [2024-12-06 06:49:08.313457] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:49.842 [2024-12-06 06:49:08.313476] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:49.842 [2024-12-06 06:49:08.316290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:49.842 [2024-12-06 06:49:08.316338] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:49.842 BaseBdev2 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.842 spare_malloc 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.842 spare_delay 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.842 
[2024-12-06 06:49:08.384302] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:49.842 [2024-12-06 06:49:08.384571] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:49.842 [2024-12-06 06:49:08.384665] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:49.842 [2024-12-06 06:49:08.384699] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:49.842 [2024-12-06 06:49:08.388490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:49.842 spare 00:22:49.842 [2024-12-06 06:49:08.388712] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.842 [2024-12-06 06:49:08.392959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:49.842 [2024-12-06 06:49:08.395784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:49.842 [2024-12-06 06:49:08.396111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:49.842 [2024-12-06 06:49:08.396139] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:49.842 [2024-12-06 06:49:08.396448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:49.842 [2024-12-06 06:49:08.396709] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:49.842 [2024-12-06 
06:49:08.396724] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:49.842 [2024-12-06 06:49:08.396975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:49.842 "name": "raid_bdev1", 00:22:49.842 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:22:49.842 "strip_size_kb": 0, 00:22:49.842 "state": "online", 00:22:49.842 "raid_level": "raid1", 00:22:49.842 "superblock": true, 00:22:49.842 "num_base_bdevs": 2, 00:22:49.842 "num_base_bdevs_discovered": 2, 00:22:49.842 "num_base_bdevs_operational": 2, 00:22:49.842 "base_bdevs_list": [ 00:22:49.842 { 00:22:49.842 "name": "BaseBdev1", 00:22:49.842 "uuid": "485b595d-f122-5288-97eb-546cbe47fbc7", 00:22:49.842 "is_configured": true, 00:22:49.842 "data_offset": 256, 00:22:49.842 "data_size": 7936 00:22:49.842 }, 00:22:49.842 { 00:22:49.842 "name": "BaseBdev2", 00:22:49.842 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:22:49.842 "is_configured": true, 00:22:49.842 "data_offset": 256, 00:22:49.842 "data_size": 7936 00:22:49.842 } 00:22:49.842 ] 00:22:49.842 }' 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:49.842 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:50.408 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:50.409 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.409 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:50.409 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:50.409 [2024-12-06 06:49:08.917644] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:50.409 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.409 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:22:50.409 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.409 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:50.409 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.409 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:50.409 06:49:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.409 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:50.409 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:50.409 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:50.409 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:50.409 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:50.409 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:50.409 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:50.409 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:50.409 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:50.409 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:50.409 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:22:50.409 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:50.409 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:50.409 
06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:50.974 [2024-12-06 06:49:09.317472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:50.974 /dev/nbd0 00:22:50.974 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:50.974 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:50.975 1+0 records in 00:22:50.975 1+0 records out 00:22:50.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065658 s, 6.2 MB/s 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:22:50.975 06:49:09 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:22:50.975 06:49:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:22:51.908 7936+0 records in 00:22:51.908 7936+0 records out 00:22:51.908 32505856 bytes (33 MB, 31 MiB) copied, 0.918145 s, 35.4 MB/s 00:22:51.908 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:51.908 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:51.908 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:51.908 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:51.908 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:22:51.908 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:51.909 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:52.167 [2024-12-06 06:49:10.579139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:52.167 [2024-12-06 06:49:10.593746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:52.167 06:49:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.167 "name": "raid_bdev1", 00:22:52.167 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:22:52.167 "strip_size_kb": 0, 00:22:52.167 "state": "online", 00:22:52.167 "raid_level": "raid1", 00:22:52.167 "superblock": true, 00:22:52.167 "num_base_bdevs": 2, 00:22:52.167 "num_base_bdevs_discovered": 1, 00:22:52.167 "num_base_bdevs_operational": 1, 00:22:52.167 "base_bdevs_list": [ 00:22:52.167 { 00:22:52.167 "name": null, 00:22:52.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.167 "is_configured": false, 00:22:52.167 "data_offset": 0, 00:22:52.167 "data_size": 7936 00:22:52.167 }, 00:22:52.167 { 00:22:52.167 "name": "BaseBdev2", 00:22:52.167 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:22:52.167 "is_configured": true, 00:22:52.167 "data_offset": 256, 00:22:52.167 
"data_size": 7936 00:22:52.167 } 00:22:52.167 ] 00:22:52.167 }' 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.167 06:49:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:52.732 06:49:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:52.732 06:49:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.732 06:49:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:52.732 [2024-12-06 06:49:11.109935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:52.732 [2024-12-06 06:49:11.128056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:22:52.732 06:49:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.732 06:49:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:52.732 [2024-12-06 06:49:11.130798] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:53.667 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:53.667 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:53.667 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:53.667 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:53.667 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:53.667 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.667 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:53.667 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.667 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:53.667 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.667 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:53.667 "name": "raid_bdev1", 00:22:53.667 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:22:53.667 "strip_size_kb": 0, 00:22:53.667 "state": "online", 00:22:53.667 "raid_level": "raid1", 00:22:53.667 "superblock": true, 00:22:53.667 "num_base_bdevs": 2, 00:22:53.667 "num_base_bdevs_discovered": 2, 00:22:53.667 "num_base_bdevs_operational": 2, 00:22:53.667 "process": { 00:22:53.667 "type": "rebuild", 00:22:53.667 "target": "spare", 00:22:53.667 "progress": { 00:22:53.667 "blocks": 2560, 00:22:53.667 "percent": 32 00:22:53.667 } 00:22:53.667 }, 00:22:53.667 "base_bdevs_list": [ 00:22:53.667 { 00:22:53.667 "name": "spare", 00:22:53.667 "uuid": "8631c1c1-32fc-55c0-81a4-8362879132e4", 00:22:53.667 "is_configured": true, 00:22:53.667 "data_offset": 256, 00:22:53.667 "data_size": 7936 00:22:53.667 }, 00:22:53.667 { 00:22:53.667 "name": "BaseBdev2", 00:22:53.667 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:22:53.667 "is_configured": true, 00:22:53.667 "data_offset": 256, 00:22:53.667 "data_size": 7936 00:22:53.667 } 00:22:53.667 ] 00:22:53.667 }' 00:22:53.667 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:53.667 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:53.667 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:53.667 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:53.667 
06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:53.667 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.667 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:53.926 [2024-12-06 06:49:12.315794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:53.926 [2024-12-06 06:49:12.339805] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:53.926 [2024-12-06 06:49:12.339898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.926 [2024-12-06 06:49:12.339922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:53.926 [2024-12-06 06:49:12.339936] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.926 06:49:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.926 "name": "raid_bdev1", 00:22:53.926 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:22:53.926 "strip_size_kb": 0, 00:22:53.926 "state": "online", 00:22:53.926 "raid_level": "raid1", 00:22:53.926 "superblock": true, 00:22:53.926 "num_base_bdevs": 2, 00:22:53.926 "num_base_bdevs_discovered": 1, 00:22:53.926 "num_base_bdevs_operational": 1, 00:22:53.926 "base_bdevs_list": [ 00:22:53.926 { 00:22:53.926 "name": null, 00:22:53.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.926 "is_configured": false, 00:22:53.926 "data_offset": 0, 00:22:53.926 "data_size": 7936 00:22:53.926 }, 00:22:53.926 { 00:22:53.926 "name": "BaseBdev2", 00:22:53.926 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:22:53.926 "is_configured": true, 00:22:53.926 "data_offset": 256, 00:22:53.926 "data_size": 7936 00:22:53.926 } 00:22:53.926 ] 00:22:53.926 }' 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.926 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.494 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:54.494 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:54.494 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:54.494 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:54.494 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:54.494 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.494 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.494 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.494 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.494 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.494 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:54.494 "name": "raid_bdev1", 00:22:54.494 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:22:54.494 "strip_size_kb": 0, 00:22:54.494 "state": "online", 00:22:54.494 "raid_level": "raid1", 00:22:54.494 "superblock": true, 00:22:54.494 "num_base_bdevs": 2, 00:22:54.494 "num_base_bdevs_discovered": 1, 00:22:54.494 "num_base_bdevs_operational": 1, 00:22:54.494 "base_bdevs_list": [ 00:22:54.494 { 00:22:54.494 "name": null, 00:22:54.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.494 "is_configured": false, 00:22:54.494 "data_offset": 0, 00:22:54.494 "data_size": 7936 00:22:54.494 }, 00:22:54.494 { 00:22:54.494 "name": "BaseBdev2", 00:22:54.494 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:22:54.494 "is_configured": true, 00:22:54.494 "data_offset": 256, 00:22:54.494 "data_size": 7936 
00:22:54.494 } 00:22:54.494 ] 00:22:54.494 }' 00:22:54.494 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:54.494 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:54.494 06:49:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:54.494 06:49:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:54.494 06:49:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:54.494 06:49:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.494 06:49:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.494 [2024-12-06 06:49:13.056779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:54.494 [2024-12-06 06:49:13.072378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:22:54.494 06:49:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.494 06:49:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:54.494 [2024-12-06 06:49:13.074845] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:55.443 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:55.443 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:55.443 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:55.443 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:55.443 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:22:55.710 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.710 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.710 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.710 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.710 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.710 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:55.710 "name": "raid_bdev1", 00:22:55.710 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:22:55.710 "strip_size_kb": 0, 00:22:55.710 "state": "online", 00:22:55.710 "raid_level": "raid1", 00:22:55.710 "superblock": true, 00:22:55.710 "num_base_bdevs": 2, 00:22:55.710 "num_base_bdevs_discovered": 2, 00:22:55.710 "num_base_bdevs_operational": 2, 00:22:55.710 "process": { 00:22:55.710 "type": "rebuild", 00:22:55.710 "target": "spare", 00:22:55.710 "progress": { 00:22:55.710 "blocks": 2560, 00:22:55.710 "percent": 32 00:22:55.710 } 00:22:55.710 }, 00:22:55.710 "base_bdevs_list": [ 00:22:55.710 { 00:22:55.710 "name": "spare", 00:22:55.710 "uuid": "8631c1c1-32fc-55c0-81a4-8362879132e4", 00:22:55.710 "is_configured": true, 00:22:55.710 "data_offset": 256, 00:22:55.710 "data_size": 7936 00:22:55.710 }, 00:22:55.710 { 00:22:55.710 "name": "BaseBdev2", 00:22:55.710 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:22:55.710 "is_configured": true, 00:22:55.710 "data_offset": 256, 00:22:55.710 "data_size": 7936 00:22:55.710 } 00:22:55.710 ] 00:22:55.710 }' 00:22:55.710 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:55.710 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:22:55.710 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:55.710 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:55.710 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:55.710 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:55.711 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=734 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:55.711 "name": "raid_bdev1", 00:22:55.711 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:22:55.711 "strip_size_kb": 0, 00:22:55.711 "state": "online", 00:22:55.711 "raid_level": "raid1", 00:22:55.711 "superblock": true, 00:22:55.711 "num_base_bdevs": 2, 00:22:55.711 "num_base_bdevs_discovered": 2, 00:22:55.711 "num_base_bdevs_operational": 2, 00:22:55.711 "process": { 00:22:55.711 "type": "rebuild", 00:22:55.711 "target": "spare", 00:22:55.711 "progress": { 00:22:55.711 "blocks": 2816, 00:22:55.711 "percent": 35 00:22:55.711 } 00:22:55.711 }, 00:22:55.711 "base_bdevs_list": [ 00:22:55.711 { 00:22:55.711 "name": "spare", 00:22:55.711 "uuid": "8631c1c1-32fc-55c0-81a4-8362879132e4", 00:22:55.711 "is_configured": true, 00:22:55.711 "data_offset": 256, 00:22:55.711 "data_size": 7936 00:22:55.711 }, 00:22:55.711 { 00:22:55.711 "name": "BaseBdev2", 00:22:55.711 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:22:55.711 "is_configured": true, 00:22:55.711 "data_offset": 256, 00:22:55.711 "data_size": 7936 00:22:55.711 } 00:22:55.711 ] 00:22:55.711 }' 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:55.711 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:55.971 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:55.971 06:49:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:22:56.931 06:49:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:56.931 06:49:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:56.931 06:49:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:56.931 06:49:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:56.931 06:49:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:56.931 06:49:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:56.931 06:49:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.931 06:49:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.931 06:49:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.931 06:49:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:56.931 06:49:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.931 06:49:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:56.931 "name": "raid_bdev1", 00:22:56.931 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:22:56.931 "strip_size_kb": 0, 00:22:56.931 "state": "online", 00:22:56.931 "raid_level": "raid1", 00:22:56.931 "superblock": true, 00:22:56.931 "num_base_bdevs": 2, 00:22:56.931 "num_base_bdevs_discovered": 2, 00:22:56.931 "num_base_bdevs_operational": 2, 00:22:56.931 "process": { 00:22:56.931 "type": "rebuild", 00:22:56.931 "target": "spare", 00:22:56.931 "progress": { 00:22:56.931 "blocks": 5888, 00:22:56.931 "percent": 74 00:22:56.931 } 00:22:56.931 }, 00:22:56.931 "base_bdevs_list": [ 00:22:56.931 { 00:22:56.931 "name": "spare", 
00:22:56.931 "uuid": "8631c1c1-32fc-55c0-81a4-8362879132e4", 00:22:56.931 "is_configured": true, 00:22:56.931 "data_offset": 256, 00:22:56.931 "data_size": 7936 00:22:56.931 }, 00:22:56.931 { 00:22:56.931 "name": "BaseBdev2", 00:22:56.931 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:22:56.931 "is_configured": true, 00:22:56.931 "data_offset": 256, 00:22:56.931 "data_size": 7936 00:22:56.931 } 00:22:56.931 ] 00:22:56.931 }' 00:22:56.931 06:49:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:56.931 06:49:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:56.931 06:49:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:56.931 06:49:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:56.931 06:49:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:57.866 [2024-12-06 06:49:16.198057] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:57.866 [2024-12-06 06:49:16.198181] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:57.866 [2024-12-06 06:49:16.198381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:58.124 06:49:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:58.124 "name": "raid_bdev1", 00:22:58.124 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:22:58.124 "strip_size_kb": 0, 00:22:58.124 "state": "online", 00:22:58.124 "raid_level": "raid1", 00:22:58.124 "superblock": true, 00:22:58.124 "num_base_bdevs": 2, 00:22:58.124 "num_base_bdevs_discovered": 2, 00:22:58.124 "num_base_bdevs_operational": 2, 00:22:58.124 "base_bdevs_list": [ 00:22:58.124 { 00:22:58.124 "name": "spare", 00:22:58.124 "uuid": "8631c1c1-32fc-55c0-81a4-8362879132e4", 00:22:58.124 "is_configured": true, 00:22:58.124 "data_offset": 256, 00:22:58.124 "data_size": 7936 00:22:58.124 }, 00:22:58.124 { 00:22:58.124 "name": "BaseBdev2", 00:22:58.124 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:22:58.124 "is_configured": true, 00:22:58.124 "data_offset": 256, 00:22:58.124 "data_size": 7936 00:22:58.124 } 00:22:58.124 ] 00:22:58.124 }' 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:58.124 06:49:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:58.124 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.383 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:58.384 "name": "raid_bdev1", 00:22:58.384 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:22:58.384 "strip_size_kb": 0, 00:22:58.384 "state": "online", 00:22:58.384 "raid_level": "raid1", 00:22:58.384 "superblock": true, 00:22:58.384 "num_base_bdevs": 2, 00:22:58.384 "num_base_bdevs_discovered": 2, 00:22:58.384 "num_base_bdevs_operational": 2, 00:22:58.384 "base_bdevs_list": [ 00:22:58.384 { 00:22:58.384 "name": "spare", 00:22:58.384 "uuid": "8631c1c1-32fc-55c0-81a4-8362879132e4", 00:22:58.384 "is_configured": true, 00:22:58.384 "data_offset": 256, 00:22:58.384 
"data_size": 7936 00:22:58.384 }, 00:22:58.384 { 00:22:58.384 "name": "BaseBdev2", 00:22:58.384 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:22:58.384 "is_configured": true, 00:22:58.384 "data_offset": 256, 00:22:58.384 "data_size": 7936 00:22:58.384 } 00:22:58.384 ] 00:22:58.384 }' 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:58.384 "name": "raid_bdev1", 00:22:58.384 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:22:58.384 "strip_size_kb": 0, 00:22:58.384 "state": "online", 00:22:58.384 "raid_level": "raid1", 00:22:58.384 "superblock": true, 00:22:58.384 "num_base_bdevs": 2, 00:22:58.384 "num_base_bdevs_discovered": 2, 00:22:58.384 "num_base_bdevs_operational": 2, 00:22:58.384 "base_bdevs_list": [ 00:22:58.384 { 00:22:58.384 "name": "spare", 00:22:58.384 "uuid": "8631c1c1-32fc-55c0-81a4-8362879132e4", 00:22:58.384 "is_configured": true, 00:22:58.384 "data_offset": 256, 00:22:58.384 "data_size": 7936 00:22:58.384 }, 00:22:58.384 { 00:22:58.384 "name": "BaseBdev2", 00:22:58.384 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:22:58.384 "is_configured": true, 00:22:58.384 "data_offset": 256, 00:22:58.384 "data_size": 7936 00:22:58.384 } 00:22:58.384 ] 00:22:58.384 }' 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:58.384 06:49:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:58.951 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:58.951 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.951 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:58.951 [2024-12-06 06:49:17.400234] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:58.951 [2024-12-06 06:49:17.400305] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:58.951 [2024-12-06 06:49:17.400402] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:58.951 [2024-12-06 06:49:17.400527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:58.951 [2024-12-06 06:49:17.400576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:58.951 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.951 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.951 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:22:58.951 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.951 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:58.951 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.951 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:58.952 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:58.952 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:58.952 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:58.952 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:58.952 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:58.952 
06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:58.952 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:58.952 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:58.952 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:22:58.952 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:58.952 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:58.952 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:59.210 /dev/nbd0 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:59.210 1+0 records in 00:22:59.210 1+0 records out 00:22:59.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421247 s, 9.7 MB/s 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:59.210 06:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:59.595 /dev/nbd1 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 
00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:59.595 1+0 records in 00:22:59.595 1+0 records out 00:22:59.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421943 s, 9.7 MB/s 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:59.595 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:59.860 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:59.860 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:59.860 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:59.860 06:49:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:59.860 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:22:59.860 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:59.860 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:00.120 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:00.120 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:00.120 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:00.120 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:00.120 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:00.120 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:00.120 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:23:00.120 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:23:00.120 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:00.120 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:00.379 
06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.379 [2024-12-06 06:49:18.954328] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:00.379 [2024-12-06 06:49:18.954404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.379 [2024-12-06 06:49:18.954449] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:00.379 [2024-12-06 06:49:18.954473] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.379 [2024-12-06 06:49:18.957601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.379 [2024-12-06 06:49:18.957653] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: spare 00:23:00.379 [2024-12-06 06:49:18.957794] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:00.379 [2024-12-06 06:49:18.957887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:00.379 [2024-12-06 06:49:18.958115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:00.379 spare 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.379 06:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.638 [2024-12-06 06:49:19.058265] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:00.638 [2024-12-06 06:49:19.058334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:00.638 [2024-12-06 06:49:19.058768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:23:00.638 [2024-12-06 06:49:19.059111] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:00.638 [2024-12-06 06:49:19.059146] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:00.638 [2024-12-06 06:49:19.059410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:00.638 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.638 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:00.638 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:00.638 
06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:00.638 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:00.638 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:00.638 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:00.638 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:00.638 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:00.639 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:00.639 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:00.639 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.639 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.639 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.639 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.639 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.639 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:00.639 "name": "raid_bdev1", 00:23:00.639 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:23:00.639 "strip_size_kb": 0, 00:23:00.639 "state": "online", 00:23:00.639 "raid_level": "raid1", 00:23:00.639 "superblock": true, 00:23:00.639 "num_base_bdevs": 2, 00:23:00.639 "num_base_bdevs_discovered": 2, 00:23:00.639 "num_base_bdevs_operational": 2, 00:23:00.639 "base_bdevs_list": [ 00:23:00.639 { 00:23:00.639 "name": "spare", 00:23:00.639 "uuid": 
"8631c1c1-32fc-55c0-81a4-8362879132e4", 00:23:00.639 "is_configured": true, 00:23:00.639 "data_offset": 256, 00:23:00.639 "data_size": 7936 00:23:00.639 }, 00:23:00.639 { 00:23:00.639 "name": "BaseBdev2", 00:23:00.639 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:23:00.639 "is_configured": true, 00:23:00.639 "data_offset": 256, 00:23:00.639 "data_size": 7936 00:23:00.639 } 00:23:00.639 ] 00:23:00.639 }' 00:23:00.639 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:00.639 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:01.206 "name": "raid_bdev1", 00:23:01.206 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:23:01.206 "strip_size_kb": 0, 00:23:01.206 
"state": "online", 00:23:01.206 "raid_level": "raid1", 00:23:01.206 "superblock": true, 00:23:01.206 "num_base_bdevs": 2, 00:23:01.206 "num_base_bdevs_discovered": 2, 00:23:01.206 "num_base_bdevs_operational": 2, 00:23:01.206 "base_bdevs_list": [ 00:23:01.206 { 00:23:01.206 "name": "spare", 00:23:01.206 "uuid": "8631c1c1-32fc-55c0-81a4-8362879132e4", 00:23:01.206 "is_configured": true, 00:23:01.206 "data_offset": 256, 00:23:01.206 "data_size": 7936 00:23:01.206 }, 00:23:01.206 { 00:23:01.206 "name": "BaseBdev2", 00:23:01.206 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:23:01.206 "is_configured": true, 00:23:01.206 "data_offset": 256, 00:23:01.206 "data_size": 7936 00:23:01.206 } 00:23:01.206 ] 00:23:01.206 }' 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:01.206 06:49:19 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:01.206 [2024-12-06 06:49:19.811589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.206 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:01.206 
06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.466 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:01.466 "name": "raid_bdev1", 00:23:01.466 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:23:01.466 "strip_size_kb": 0, 00:23:01.466 "state": "online", 00:23:01.466 "raid_level": "raid1", 00:23:01.466 "superblock": true, 00:23:01.466 "num_base_bdevs": 2, 00:23:01.466 "num_base_bdevs_discovered": 1, 00:23:01.466 "num_base_bdevs_operational": 1, 00:23:01.466 "base_bdevs_list": [ 00:23:01.466 { 00:23:01.466 "name": null, 00:23:01.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.466 "is_configured": false, 00:23:01.466 "data_offset": 0, 00:23:01.466 "data_size": 7936 00:23:01.466 }, 00:23:01.466 { 00:23:01.466 "name": "BaseBdev2", 00:23:01.466 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:23:01.466 "is_configured": true, 00:23:01.466 "data_offset": 256, 00:23:01.466 "data_size": 7936 00:23:01.466 } 00:23:01.466 ] 00:23:01.466 }' 00:23:01.466 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:01.466 06:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:01.725 06:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:01.725 06:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.725 06:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:01.725 [2024-12-06 06:49:20.320736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:01.725 [2024-12-06 06:49:20.320990] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:01.725 [2024-12-06 06:49:20.321016] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: 
Re-adding bdev spare to raid bdev raid_bdev1. 00:23:01.725 [2024-12-06 06:49:20.321065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:01.725 [2024-12-06 06:49:20.336534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:23:01.725 06:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.725 06:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:01.725 [2024-12-06 06:49:20.339025] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:03.105 "name": "raid_bdev1", 00:23:03.105 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:23:03.105 
"strip_size_kb": 0, 00:23:03.105 "state": "online", 00:23:03.105 "raid_level": "raid1", 00:23:03.105 "superblock": true, 00:23:03.105 "num_base_bdevs": 2, 00:23:03.105 "num_base_bdevs_discovered": 2, 00:23:03.105 "num_base_bdevs_operational": 2, 00:23:03.105 "process": { 00:23:03.105 "type": "rebuild", 00:23:03.105 "target": "spare", 00:23:03.105 "progress": { 00:23:03.105 "blocks": 2560, 00:23:03.105 "percent": 32 00:23:03.105 } 00:23:03.105 }, 00:23:03.105 "base_bdevs_list": [ 00:23:03.105 { 00:23:03.105 "name": "spare", 00:23:03.105 "uuid": "8631c1c1-32fc-55c0-81a4-8362879132e4", 00:23:03.105 "is_configured": true, 00:23:03.105 "data_offset": 256, 00:23:03.105 "data_size": 7936 00:23:03.105 }, 00:23:03.105 { 00:23:03.105 "name": "BaseBdev2", 00:23:03.105 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:23:03.105 "is_configured": true, 00:23:03.105 "data_offset": 256, 00:23:03.105 "data_size": 7936 00:23:03.105 } 00:23:03.105 ] 00:23:03.105 }' 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:03.105 [2024-12-06 06:49:21.508646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:03.105 [2024-12-06 06:49:21.548476] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:23:03.105 [2024-12-06 06:49:21.548580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:03.105 [2024-12-06 06:49:21.548615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:03.105 [2024-12-06 06:49:21.548630] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.105 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:03.105 "name": "raid_bdev1", 00:23:03.105 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:23:03.105 "strip_size_kb": 0, 00:23:03.105 "state": "online", 00:23:03.105 "raid_level": "raid1", 00:23:03.105 "superblock": true, 00:23:03.106 "num_base_bdevs": 2, 00:23:03.106 "num_base_bdevs_discovered": 1, 00:23:03.106 "num_base_bdevs_operational": 1, 00:23:03.106 "base_bdevs_list": [ 00:23:03.106 { 00:23:03.106 "name": null, 00:23:03.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.106 "is_configured": false, 00:23:03.106 "data_offset": 0, 00:23:03.106 "data_size": 7936 00:23:03.106 }, 00:23:03.106 { 00:23:03.106 "name": "BaseBdev2", 00:23:03.106 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:23:03.106 "is_configured": true, 00:23:03.106 "data_offset": 256, 00:23:03.106 "data_size": 7936 00:23:03.106 } 00:23:03.106 ] 00:23:03.106 }' 00:23:03.106 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:03.106 06:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:03.673 06:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:03.673 06:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.673 06:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:03.673 [2024-12-06 06:49:22.141070] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:03.673 [2024-12-06 06:49:22.141149] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.673 [2024-12-06 
06:49:22.141181] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:03.673 [2024-12-06 06:49:22.141198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.673 [2024-12-06 06:49:22.141807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.673 [2024-12-06 06:49:22.141856] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:03.673 [2024-12-06 06:49:22.141988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:03.673 [2024-12-06 06:49:22.142013] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:03.673 [2024-12-06 06:49:22.142031] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:03.673 [2024-12-06 06:49:22.142068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:03.673 [2024-12-06 06:49:22.157448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:23:03.673 spare 00:23:03.673 06:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.673 06:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:03.673 [2024-12-06 06:49:22.159939] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:04.622 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:04.622 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:04.622 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:04.622 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:23:04.622 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:04.622 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.622 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.622 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.622 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:04.622 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.622 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:04.622 "name": "raid_bdev1", 00:23:04.622 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:23:04.622 "strip_size_kb": 0, 00:23:04.622 "state": "online", 00:23:04.622 "raid_level": "raid1", 00:23:04.622 "superblock": true, 00:23:04.622 "num_base_bdevs": 2, 00:23:04.622 "num_base_bdevs_discovered": 2, 00:23:04.622 "num_base_bdevs_operational": 2, 00:23:04.622 "process": { 00:23:04.622 "type": "rebuild", 00:23:04.622 "target": "spare", 00:23:04.622 "progress": { 00:23:04.622 "blocks": 2560, 00:23:04.622 "percent": 32 00:23:04.622 } 00:23:04.622 }, 00:23:04.622 "base_bdevs_list": [ 00:23:04.622 { 00:23:04.622 "name": "spare", 00:23:04.622 "uuid": "8631c1c1-32fc-55c0-81a4-8362879132e4", 00:23:04.622 "is_configured": true, 00:23:04.622 "data_offset": 256, 00:23:04.622 "data_size": 7936 00:23:04.622 }, 00:23:04.622 { 00:23:04.622 "name": "BaseBdev2", 00:23:04.622 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:23:04.622 "is_configured": true, 00:23:04.622 "data_offset": 256, 00:23:04.622 "data_size": 7936 00:23:04.622 } 00:23:04.622 ] 00:23:04.622 }' 00:23:04.622 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:04.622 06:49:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:04.882 [2024-12-06 06:49:23.321681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:04.882 [2024-12-06 06:49:23.368828] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:04.882 [2024-12-06 06:49:23.368907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:04.882 [2024-12-06 06:49:23.368935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:04.882 [2024-12-06 06:49:23.368946] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:04.882 06:49:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.882 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.882 "name": "raid_bdev1", 00:23:04.882 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:23:04.882 "strip_size_kb": 0, 00:23:04.882 "state": "online", 00:23:04.882 "raid_level": "raid1", 00:23:04.882 "superblock": true, 00:23:04.882 "num_base_bdevs": 2, 00:23:04.882 "num_base_bdevs_discovered": 1, 00:23:04.882 "num_base_bdevs_operational": 1, 00:23:04.882 "base_bdevs_list": [ 00:23:04.882 { 00:23:04.882 "name": null, 00:23:04.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.882 "is_configured": false, 00:23:04.882 "data_offset": 0, 00:23:04.882 "data_size": 7936 00:23:04.882 }, 00:23:04.882 { 00:23:04.882 "name": "BaseBdev2", 00:23:04.882 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:23:04.882 "is_configured": true, 00:23:04.882 "data_offset": 256, 00:23:04.883 
"data_size": 7936 00:23:04.883 } 00:23:04.883 ] 00:23:04.883 }' 00:23:04.883 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.883 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:05.451 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:05.451 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:05.451 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:05.451 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:05.451 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:05.451 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.451 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.451 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.451 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:05.451 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.451 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:05.451 "name": "raid_bdev1", 00:23:05.451 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:23:05.451 "strip_size_kb": 0, 00:23:05.451 "state": "online", 00:23:05.451 "raid_level": "raid1", 00:23:05.451 "superblock": true, 00:23:05.451 "num_base_bdevs": 2, 00:23:05.451 "num_base_bdevs_discovered": 1, 00:23:05.452 "num_base_bdevs_operational": 1, 00:23:05.452 "base_bdevs_list": [ 00:23:05.452 { 00:23:05.452 "name": null, 00:23:05.452 "uuid": "00000000-0000-0000-0000-000000000000", 
00:23:05.452 "is_configured": false, 00:23:05.452 "data_offset": 0, 00:23:05.452 "data_size": 7936 00:23:05.452 }, 00:23:05.452 { 00:23:05.452 "name": "BaseBdev2", 00:23:05.452 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:23:05.452 "is_configured": true, 00:23:05.452 "data_offset": 256, 00:23:05.452 "data_size": 7936 00:23:05.452 } 00:23:05.452 ] 00:23:05.452 }' 00:23:05.452 06:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:05.452 06:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:05.452 06:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:05.452 06:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:05.452 06:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:05.452 06:49:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.452 06:49:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:05.452 06:49:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.452 06:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:05.452 06:49:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.452 06:49:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:05.452 [2024-12-06 06:49:24.096742] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:05.452 [2024-12-06 06:49:24.096817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.452 [2024-12-06 06:49:24.096856] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000b180 00:23:05.711 [2024-12-06 06:49:24.096884] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.711 [2024-12-06 06:49:24.097513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.711 [2024-12-06 06:49:24.097575] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:05.711 [2024-12-06 06:49:24.097687] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:05.711 [2024-12-06 06:49:24.097709] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:05.711 [2024-12-06 06:49:24.097725] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:05.711 [2024-12-06 06:49:24.097739] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:05.711 BaseBdev1 00:23:05.711 06:49:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.711 06:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:06.645 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:06.645 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:06.645 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:06.645 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:06.645 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:06.645 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:06.645 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:23:06.645 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:06.645 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:06.645 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:06.645 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.645 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.645 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.645 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:06.645 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.645 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:06.645 "name": "raid_bdev1", 00:23:06.645 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:23:06.645 "strip_size_kb": 0, 00:23:06.645 "state": "online", 00:23:06.645 "raid_level": "raid1", 00:23:06.645 "superblock": true, 00:23:06.645 "num_base_bdevs": 2, 00:23:06.645 "num_base_bdevs_discovered": 1, 00:23:06.645 "num_base_bdevs_operational": 1, 00:23:06.645 "base_bdevs_list": [ 00:23:06.645 { 00:23:06.645 "name": null, 00:23:06.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.645 "is_configured": false, 00:23:06.645 "data_offset": 0, 00:23:06.645 "data_size": 7936 00:23:06.645 }, 00:23:06.645 { 00:23:06.645 "name": "BaseBdev2", 00:23:06.645 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:23:06.645 "is_configured": true, 00:23:06.645 "data_offset": 256, 00:23:06.645 "data_size": 7936 00:23:06.645 } 00:23:06.645 ] 00:23:06.645 }' 00:23:06.645 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:06.645 06:49:25 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:07.212 "name": "raid_bdev1", 00:23:07.212 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:23:07.212 "strip_size_kb": 0, 00:23:07.212 "state": "online", 00:23:07.212 "raid_level": "raid1", 00:23:07.212 "superblock": true, 00:23:07.212 "num_base_bdevs": 2, 00:23:07.212 "num_base_bdevs_discovered": 1, 00:23:07.212 "num_base_bdevs_operational": 1, 00:23:07.212 "base_bdevs_list": [ 00:23:07.212 { 00:23:07.212 "name": null, 00:23:07.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.212 "is_configured": false, 00:23:07.212 "data_offset": 0, 00:23:07.212 "data_size": 7936 00:23:07.212 }, 00:23:07.212 { 00:23:07.212 "name": "BaseBdev2", 00:23:07.212 "uuid": 
"28b879c8-e395-5c62-b288-c6cb42854d19", 00:23:07.212 "is_configured": true, 00:23:07.212 "data_offset": 256, 00:23:07.212 "data_size": 7936 00:23:07.212 } 00:23:07.212 ] 00:23:07.212 }' 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:07.212 [2024-12-06 06:49:25.793480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:23:07.212 [2024-12-06 06:49:25.793723] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:07.212 [2024-12-06 06:49:25.793748] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:07.212 request: 00:23:07.212 { 00:23:07.212 "base_bdev": "BaseBdev1", 00:23:07.212 "raid_bdev": "raid_bdev1", 00:23:07.212 "method": "bdev_raid_add_base_bdev", 00:23:07.212 "req_id": 1 00:23:07.212 } 00:23:07.212 Got JSON-RPC error response 00:23:07.212 response: 00:23:07.212 { 00:23:07.212 "code": -22, 00:23:07.212 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:07.212 } 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:07.212 06:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:08.193 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:08.193 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:08.193 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:08.193 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:08.193 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:08.193 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:08.193 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.193 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.193 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.193 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.193 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.193 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.193 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.193 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:08.193 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.460 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.460 "name": "raid_bdev1", 00:23:08.460 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:23:08.460 "strip_size_kb": 0, 00:23:08.460 "state": "online", 00:23:08.460 "raid_level": "raid1", 00:23:08.460 "superblock": true, 00:23:08.460 "num_base_bdevs": 2, 00:23:08.460 "num_base_bdevs_discovered": 1, 00:23:08.460 "num_base_bdevs_operational": 1, 00:23:08.460 "base_bdevs_list": [ 00:23:08.460 { 00:23:08.460 "name": null, 00:23:08.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.460 "is_configured": false, 00:23:08.460 "data_offset": 0, 00:23:08.460 "data_size": 7936 00:23:08.460 }, 00:23:08.460 { 00:23:08.460 "name": "BaseBdev2", 00:23:08.460 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:23:08.460 "is_configured": true, 00:23:08.460 "data_offset": 256, 00:23:08.460 "data_size": 7936 00:23:08.460 } 
00:23:08.460 ] 00:23:08.460 }' 00:23:08.460 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.460 06:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:08.719 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:08.719 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:08.719 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:08.719 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:08.719 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:08.719 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.719 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.719 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.719 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:08.719 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.977 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:08.977 "name": "raid_bdev1", 00:23:08.977 "uuid": "9bd28209-c60b-4692-a7c6-8bcebad6ce32", 00:23:08.977 "strip_size_kb": 0, 00:23:08.977 "state": "online", 00:23:08.977 "raid_level": "raid1", 00:23:08.977 "superblock": true, 00:23:08.977 "num_base_bdevs": 2, 00:23:08.977 "num_base_bdevs_discovered": 1, 00:23:08.977 "num_base_bdevs_operational": 1, 00:23:08.977 "base_bdevs_list": [ 00:23:08.977 { 00:23:08.977 "name": null, 00:23:08.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.977 "is_configured": false, 
00:23:08.977 "data_offset": 0, 00:23:08.977 "data_size": 7936 00:23:08.977 }, 00:23:08.977 { 00:23:08.977 "name": "BaseBdev2", 00:23:08.977 "uuid": "28b879c8-e395-5c62-b288-c6cb42854d19", 00:23:08.977 "is_configured": true, 00:23:08.977 "data_offset": 256, 00:23:08.977 "data_size": 7936 00:23:08.977 } 00:23:08.977 ] 00:23:08.977 }' 00:23:08.977 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:08.977 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:08.977 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:08.977 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:08.977 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87108 00:23:08.977 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87108 ']' 00:23:08.977 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87108 00:23:08.977 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:23:08.977 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.977 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87108 00:23:08.977 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:08.977 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:08.977 killing process with pid 87108 00:23:08.977 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87108' 00:23:08.978 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87108 00:23:08.978 Received 
shutdown signal, test time was about 60.000000 seconds 00:23:08.978 00:23:08.978 Latency(us) 00:23:08.978 [2024-12-06T06:49:27.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.978 [2024-12-06T06:49:27.625Z] =================================================================================================================== 00:23:08.978 [2024-12-06T06:49:27.625Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:08.978 [2024-12-06 06:49:27.526651] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:08.978 06:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87108 00:23:08.978 [2024-12-06 06:49:27.526819] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:08.978 [2024-12-06 06:49:27.526889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:08.978 [2024-12-06 06:49:27.526910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:09.235 [2024-12-06 06:49:27.804025] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:10.609 06:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:23:10.609 00:23:10.609 real 0m21.855s 00:23:10.609 user 0m29.756s 00:23:10.609 sys 0m2.522s 00:23:10.609 06:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:10.609 ************************************ 00:23:10.609 END TEST raid_rebuild_test_sb_4k 00:23:10.609 ************************************ 00:23:10.609 06:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:10.609 06:49:28 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:23:10.609 06:49:28 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:23:10.609 
06:49:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:10.609 06:49:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:10.609 06:49:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:10.609 ************************************ 00:23:10.609 START TEST raid_state_function_test_sb_md_separate 00:23:10.609 ************************************ 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87817 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87817' 00:23:10.609 Process raid pid: 87817 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87817 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87817 ']' 00:23:10.609 06:49:28 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.609 06:49:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:10.609 [2024-12-06 06:49:29.043795] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:23:10.609 [2024-12-06 06:49:29.043945] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.609 [2024-12-06 06:49:29.224291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.869 [2024-12-06 06:49:29.357789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.126 [2024-12-06 06:49:29.567551] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:11.127 [2024-12-06 06:49:29.567623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:11.692 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.692 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:23:11.692 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:11.692 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.692 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:11.692 [2024-12-06 06:49:30.080678] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:11.692 [2024-12-06 06:49:30.080740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:11.693 [2024-12-06 06:49:30.080757] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:11.693 [2024-12-06 06:49:30.080774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:11.693 "name": "Existed_Raid", 00:23:11.693 "uuid": "7eaaf228-95f8-4642-8d3e-c21c8f3ca120", 00:23:11.693 "strip_size_kb": 0, 00:23:11.693 "state": "configuring", 00:23:11.693 "raid_level": "raid1", 00:23:11.693 "superblock": true, 00:23:11.693 "num_base_bdevs": 2, 00:23:11.693 "num_base_bdevs_discovered": 0, 00:23:11.693 "num_base_bdevs_operational": 2, 00:23:11.693 "base_bdevs_list": [ 00:23:11.693 { 00:23:11.693 "name": "BaseBdev1", 00:23:11.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.693 "is_configured": false, 00:23:11.693 "data_offset": 0, 00:23:11.693 "data_size": 0 00:23:11.693 }, 00:23:11.693 { 00:23:11.693 "name": "BaseBdev2", 00:23:11.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.693 "is_configured": false, 00:23:11.693 "data_offset": 0, 00:23:11.693 "data_size": 0 00:23:11.693 } 00:23:11.693 ] 00:23:11.693 }' 00:23:11.693 06:49:30 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:11.693 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.275 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:12.275 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.275 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.275 [2024-12-06 06:49:30.612792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:12.275 [2024-12-06 06:49:30.612838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:12.275 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.275 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:12.275 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.275 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.275 [2024-12-06 06:49:30.620776] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:12.275 [2024-12-06 06:49:30.620998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:12.275 [2024-12-06 06:49:30.621149] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:12.275 [2024-12-06 06:49:30.621237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:12.275 06:49:30 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.275 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:23:12.275 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.275 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.275 [2024-12-06 06:49:30.667269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:12.275 BaseBdev1 00:23:12.275 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.276 [ 00:23:12.276 { 00:23:12.276 "name": "BaseBdev1", 00:23:12.276 "aliases": [ 00:23:12.276 "a6dfedf7-38cb-4108-8c88-d37ffab74782" 00:23:12.276 ], 00:23:12.276 "product_name": "Malloc disk", 00:23:12.276 "block_size": 4096, 00:23:12.276 "num_blocks": 8192, 00:23:12.276 "uuid": "a6dfedf7-38cb-4108-8c88-d37ffab74782", 00:23:12.276 "md_size": 32, 00:23:12.276 "md_interleave": false, 00:23:12.276 "dif_type": 0, 00:23:12.276 "assigned_rate_limits": { 00:23:12.276 "rw_ios_per_sec": 0, 00:23:12.276 "rw_mbytes_per_sec": 0, 00:23:12.276 "r_mbytes_per_sec": 0, 00:23:12.276 "w_mbytes_per_sec": 0 00:23:12.276 }, 00:23:12.276 "claimed": true, 00:23:12.276 "claim_type": "exclusive_write", 00:23:12.276 "zoned": false, 00:23:12.276 "supported_io_types": { 00:23:12.276 "read": true, 00:23:12.276 "write": true, 00:23:12.276 "unmap": true, 00:23:12.276 "flush": true, 00:23:12.276 "reset": true, 00:23:12.276 "nvme_admin": false, 00:23:12.276 "nvme_io": false, 00:23:12.276 "nvme_io_md": false, 00:23:12.276 "write_zeroes": true, 00:23:12.276 "zcopy": true, 00:23:12.276 "get_zone_info": false, 00:23:12.276 "zone_management": false, 00:23:12.276 "zone_append": false, 00:23:12.276 "compare": false, 00:23:12.276 "compare_and_write": false, 00:23:12.276 "abort": true, 00:23:12.276 "seek_hole": false, 00:23:12.276 "seek_data": false, 00:23:12.276 "copy": true, 00:23:12.276 "nvme_iov_md": false 00:23:12.276 }, 00:23:12.276 "memory_domains": [ 00:23:12.276 { 00:23:12.276 "dma_device_id": "system", 00:23:12.276 "dma_device_type": 1 00:23:12.276 }, 
00:23:12.276 { 00:23:12.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.276 "dma_device_type": 2 00:23:12.276 } 00:23:12.276 ], 00:23:12.276 "driver_specific": {} 00:23:12.276 } 00:23:12.276 ] 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:12.276 "name": "Existed_Raid", 00:23:12.276 "uuid": "006942d8-0c83-44b6-980d-27fe20bb9866", 00:23:12.276 "strip_size_kb": 0, 00:23:12.276 "state": "configuring", 00:23:12.276 "raid_level": "raid1", 00:23:12.276 "superblock": true, 00:23:12.276 "num_base_bdevs": 2, 00:23:12.276 "num_base_bdevs_discovered": 1, 00:23:12.276 "num_base_bdevs_operational": 2, 00:23:12.276 "base_bdevs_list": [ 00:23:12.276 { 00:23:12.276 "name": "BaseBdev1", 00:23:12.276 "uuid": "a6dfedf7-38cb-4108-8c88-d37ffab74782", 00:23:12.276 "is_configured": true, 00:23:12.276 "data_offset": 256, 00:23:12.276 "data_size": 7936 00:23:12.276 }, 00:23:12.276 { 00:23:12.276 "name": "BaseBdev2", 00:23:12.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.276 "is_configured": false, 00:23:12.276 "data_offset": 0, 00:23:12.276 "data_size": 0 00:23:12.276 } 00:23:12.276 ] 00:23:12.276 }' 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:12.276 06:49:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:23:12.842 [2024-12-06 06:49:31.195482] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:12.842 [2024-12-06 06:49:31.195681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.842 [2024-12-06 06:49:31.203511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:12.842 [2024-12-06 06:49:31.206149] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:12.842 [2024-12-06 06:49:31.206321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:12.842 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:12.843 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:12.843 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:12.843 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:12.843 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.843 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.843 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:12.843 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.843 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:12.843 "name": "Existed_Raid", 00:23:12.843 "uuid": "889857db-cb49-4285-8071-28be7a2aae1e", 00:23:12.843 "strip_size_kb": 0, 00:23:12.843 "state": "configuring", 00:23:12.843 "raid_level": "raid1", 00:23:12.843 "superblock": true, 00:23:12.843 "num_base_bdevs": 2, 00:23:12.843 "num_base_bdevs_discovered": 1, 00:23:12.843 
"num_base_bdevs_operational": 2, 00:23:12.843 "base_bdevs_list": [ 00:23:12.843 { 00:23:12.843 "name": "BaseBdev1", 00:23:12.843 "uuid": "a6dfedf7-38cb-4108-8c88-d37ffab74782", 00:23:12.843 "is_configured": true, 00:23:12.843 "data_offset": 256, 00:23:12.843 "data_size": 7936 00:23:12.843 }, 00:23:12.843 { 00:23:12.843 "name": "BaseBdev2", 00:23:12.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.843 "is_configured": false, 00:23:12.843 "data_offset": 0, 00:23:12.843 "data_size": 0 00:23:12.843 } 00:23:12.843 ] 00:23:12.843 }' 00:23:12.843 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:12.843 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.102 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:23:13.102 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.102 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.360 [2024-12-06 06:49:31.782070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:13.360 BaseBdev2 00:23:13.360 [2024-12-06 06:49:31.782615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:13.360 [2024-12-06 06:49:31.782645] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:13.360 [2024-12-06 06:49:31.782762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:13.360 [2024-12-06 06:49:31.782930] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:13.360 [2024-12-06 06:49:31.782950] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:23:13.360 [2024-12-06 06:49:31.783099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:13.360 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.360 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:13.360 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:13.360 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:13.360 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:23:13.360 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:13.360 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:13.360 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:13.360 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.360 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.360 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.360 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:13.360 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.360 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.361 [ 00:23:13.361 { 00:23:13.361 "name": "BaseBdev2", 00:23:13.361 "aliases": [ 00:23:13.361 
"e559dfc4-0023-45eb-89c3-538f1264be3a" 00:23:13.361 ], 00:23:13.361 "product_name": "Malloc disk", 00:23:13.361 "block_size": 4096, 00:23:13.361 "num_blocks": 8192, 00:23:13.361 "uuid": "e559dfc4-0023-45eb-89c3-538f1264be3a", 00:23:13.361 "md_size": 32, 00:23:13.361 "md_interleave": false, 00:23:13.361 "dif_type": 0, 00:23:13.361 "assigned_rate_limits": { 00:23:13.361 "rw_ios_per_sec": 0, 00:23:13.361 "rw_mbytes_per_sec": 0, 00:23:13.361 "r_mbytes_per_sec": 0, 00:23:13.361 "w_mbytes_per_sec": 0 00:23:13.361 }, 00:23:13.361 "claimed": true, 00:23:13.361 "claim_type": "exclusive_write", 00:23:13.361 "zoned": false, 00:23:13.361 "supported_io_types": { 00:23:13.361 "read": true, 00:23:13.361 "write": true, 00:23:13.361 "unmap": true, 00:23:13.361 "flush": true, 00:23:13.361 "reset": true, 00:23:13.361 "nvme_admin": false, 00:23:13.361 "nvme_io": false, 00:23:13.361 "nvme_io_md": false, 00:23:13.361 "write_zeroes": true, 00:23:13.361 "zcopy": true, 00:23:13.361 "get_zone_info": false, 00:23:13.361 "zone_management": false, 00:23:13.361 "zone_append": false, 00:23:13.361 "compare": false, 00:23:13.361 "compare_and_write": false, 00:23:13.361 "abort": true, 00:23:13.361 "seek_hole": false, 00:23:13.361 "seek_data": false, 00:23:13.361 "copy": true, 00:23:13.361 "nvme_iov_md": false 00:23:13.361 }, 00:23:13.361 "memory_domains": [ 00:23:13.361 { 00:23:13.361 "dma_device_id": "system", 00:23:13.361 "dma_device_type": 1 00:23:13.361 }, 00:23:13.361 { 00:23:13.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.361 "dma_device_type": 2 00:23:13.361 } 00:23:13.361 ], 00:23:13.361 "driver_specific": {} 00:23:13.361 } 00:23:13.361 ] 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.361 06:49:31 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:13.361 "name": "Existed_Raid", 00:23:13.361 "uuid": "889857db-cb49-4285-8071-28be7a2aae1e", 00:23:13.361 "strip_size_kb": 0, 00:23:13.361 "state": "online", 00:23:13.361 "raid_level": "raid1", 00:23:13.361 "superblock": true, 00:23:13.361 "num_base_bdevs": 2, 00:23:13.361 "num_base_bdevs_discovered": 2, 00:23:13.361 "num_base_bdevs_operational": 2, 00:23:13.361 "base_bdevs_list": [ 00:23:13.361 { 00:23:13.361 "name": "BaseBdev1", 00:23:13.361 "uuid": "a6dfedf7-38cb-4108-8c88-d37ffab74782", 00:23:13.361 "is_configured": true, 00:23:13.361 "data_offset": 256, 00:23:13.361 "data_size": 7936 00:23:13.361 }, 00:23:13.361 { 00:23:13.361 "name": "BaseBdev2", 00:23:13.361 "uuid": "e559dfc4-0023-45eb-89c3-538f1264be3a", 00:23:13.361 "is_configured": true, 00:23:13.361 "data_offset": 256, 00:23:13.361 "data_size": 7936 00:23:13.361 } 00:23:13.361 ] 00:23:13.361 }' 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:13.361 06:49:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.928 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:13.928 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:13.928 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:13.928 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:13.928 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:23:13.928 06:49:32 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:13.928 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:13.928 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:13.928 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.928 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.928 [2024-12-06 06:49:32.346923] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:13.928 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.928 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:13.928 "name": "Existed_Raid", 00:23:13.928 "aliases": [ 00:23:13.928 "889857db-cb49-4285-8071-28be7a2aae1e" 00:23:13.928 ], 00:23:13.928 "product_name": "Raid Volume", 00:23:13.928 "block_size": 4096, 00:23:13.928 "num_blocks": 7936, 00:23:13.928 "uuid": "889857db-cb49-4285-8071-28be7a2aae1e", 00:23:13.928 "md_size": 32, 00:23:13.928 "md_interleave": false, 00:23:13.928 "dif_type": 0, 00:23:13.928 "assigned_rate_limits": { 00:23:13.928 "rw_ios_per_sec": 0, 00:23:13.928 "rw_mbytes_per_sec": 0, 00:23:13.928 "r_mbytes_per_sec": 0, 00:23:13.928 "w_mbytes_per_sec": 0 00:23:13.928 }, 00:23:13.928 "claimed": false, 00:23:13.928 "zoned": false, 00:23:13.928 "supported_io_types": { 00:23:13.928 "read": true, 00:23:13.928 "write": true, 00:23:13.928 "unmap": false, 00:23:13.928 "flush": false, 00:23:13.928 "reset": true, 00:23:13.928 "nvme_admin": false, 00:23:13.928 "nvme_io": false, 00:23:13.929 "nvme_io_md": false, 00:23:13.929 "write_zeroes": true, 00:23:13.929 "zcopy": false, 00:23:13.929 "get_zone_info": 
false, 00:23:13.929 "zone_management": false, 00:23:13.929 "zone_append": false, 00:23:13.929 "compare": false, 00:23:13.929 "compare_and_write": false, 00:23:13.929 "abort": false, 00:23:13.929 "seek_hole": false, 00:23:13.929 "seek_data": false, 00:23:13.929 "copy": false, 00:23:13.929 "nvme_iov_md": false 00:23:13.929 }, 00:23:13.929 "memory_domains": [ 00:23:13.929 { 00:23:13.929 "dma_device_id": "system", 00:23:13.929 "dma_device_type": 1 00:23:13.929 }, 00:23:13.929 { 00:23:13.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.929 "dma_device_type": 2 00:23:13.929 }, 00:23:13.929 { 00:23:13.929 "dma_device_id": "system", 00:23:13.929 "dma_device_type": 1 00:23:13.929 }, 00:23:13.929 { 00:23:13.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.929 "dma_device_type": 2 00:23:13.929 } 00:23:13.929 ], 00:23:13.929 "driver_specific": { 00:23:13.929 "raid": { 00:23:13.929 "uuid": "889857db-cb49-4285-8071-28be7a2aae1e", 00:23:13.929 "strip_size_kb": 0, 00:23:13.929 "state": "online", 00:23:13.929 "raid_level": "raid1", 00:23:13.929 "superblock": true, 00:23:13.929 "num_base_bdevs": 2, 00:23:13.929 "num_base_bdevs_discovered": 2, 00:23:13.929 "num_base_bdevs_operational": 2, 00:23:13.929 "base_bdevs_list": [ 00:23:13.929 { 00:23:13.929 "name": "BaseBdev1", 00:23:13.929 "uuid": "a6dfedf7-38cb-4108-8c88-d37ffab74782", 00:23:13.929 "is_configured": true, 00:23:13.929 "data_offset": 256, 00:23:13.929 "data_size": 7936 00:23:13.929 }, 00:23:13.929 { 00:23:13.929 "name": "BaseBdev2", 00:23:13.929 "uuid": "e559dfc4-0023-45eb-89c3-538f1264be3a", 00:23:13.929 "is_configured": true, 00:23:13.929 "data_offset": 256, 00:23:13.929 "data_size": 7936 00:23:13.929 } 00:23:13.929 ] 00:23:13.929 } 00:23:13.929 } 00:23:13.929 }' 00:23:13.929 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:13.929 06:49:32 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:13.929 BaseBdev2' 00:23:13.929 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:13.929 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:23:13.929 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:13.929 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:13.929 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:13.929 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.929 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.929 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.929 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:13.929 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:13.929 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:13.929 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:13.929 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.929 06:49:32 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.929 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:13.929 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.188 [2024-12-06 06:49:32.602668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:14.188 "name": "Existed_Raid", 
00:23:14.188 "uuid": "889857db-cb49-4285-8071-28be7a2aae1e", 00:23:14.188 "strip_size_kb": 0, 00:23:14.188 "state": "online", 00:23:14.188 "raid_level": "raid1", 00:23:14.188 "superblock": true, 00:23:14.188 "num_base_bdevs": 2, 00:23:14.188 "num_base_bdevs_discovered": 1, 00:23:14.188 "num_base_bdevs_operational": 1, 00:23:14.188 "base_bdevs_list": [ 00:23:14.188 { 00:23:14.188 "name": null, 00:23:14.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.188 "is_configured": false, 00:23:14.188 "data_offset": 0, 00:23:14.188 "data_size": 7936 00:23:14.188 }, 00:23:14.188 { 00:23:14.188 "name": "BaseBdev2", 00:23:14.188 "uuid": "e559dfc4-0023-45eb-89c3-538f1264be3a", 00:23:14.188 "is_configured": true, 00:23:14.188 "data_offset": 256, 00:23:14.188 "data_size": 7936 00:23:14.188 } 00:23:14.188 ] 00:23:14.188 }' 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:14.188 06:49:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.775 [2024-12-06 06:49:33.284198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:14.775 [2024-12-06 06:49:33.284739] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:14.775 [2024-12-06 06:49:33.388988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:14.775 [2024-12-06 06:49:33.389094] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:14.775 [2024-12-06 06:49:33.389123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 
00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.775 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.033 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:15.033 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:15.033 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:23:15.033 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87817 00:23:15.033 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87817 ']' 00:23:15.033 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87817 00:23:15.033 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:23:15.033 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.033 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87817 00:23:15.033 killing process with pid 87817 00:23:15.033 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:15.033 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:15.033 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87817' 00:23:15.033 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87817 00:23:15.033 06:49:33 bdev_raid.raid_state_function_test_sb_md_separate 
-- common/autotest_common.sh@978 -- # wait 87817 00:23:15.033 [2024-12-06 06:49:33.492347] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:15.033 [2024-12-06 06:49:33.508380] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:16.484 ************************************ 00:23:16.484 END TEST raid_state_function_test_sb_md_separate 00:23:16.484 ************************************ 00:23:16.484 06:49:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:23:16.484 00:23:16.484 real 0m5.740s 00:23:16.484 user 0m8.522s 00:23:16.484 sys 0m0.867s 00:23:16.484 06:49:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:16.484 06:49:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.484 06:49:34 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:23:16.484 06:49:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:16.484 06:49:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:16.484 06:49:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:16.484 ************************************ 00:23:16.484 START TEST raid_superblock_test_md_separate 00:23:16.484 ************************************ 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local 
base_bdevs_malloc 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88068 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88068 00:23:16.484 06:49:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:16.485 06:49:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88068 ']' 00:23:16.485 06:49:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.485 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.485 06:49:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.485 06:49:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.485 06:49:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.485 06:49:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.485 [2024-12-06 06:49:34.827516] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:23:16.485 [2024-12-06 06:49:34.827737] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88068 ] 00:23:16.485 [2024-12-06 06:49:35.011000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.756 [2024-12-06 06:49:35.194264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.013 [2024-12-06 06:49:35.499576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:17.013 [2024-12-06 06:49:35.499934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:17.271 06:49:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.271 06:49:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:23:17.271 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:17.271 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:17.271 06:49:35 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:17.271 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:17.271 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:17.271 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:17.271 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:17.271 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:17.271 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:23:17.271 06:49:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.271 06:49:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:17.529 malloc1 00:23:17.529 06:49:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.529 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:17.529 06:49:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.529 06:49:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:17.529 [2024-12-06 06:49:35.960137] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:17.529 [2024-12-06 06:49:35.960422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.529 [2024-12-06 06:49:35.960499] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:23:17.529 [2024-12-06 06:49:35.960803] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.529 [2024-12-06 06:49:35.963626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.529 [2024-12-06 06:49:35.963670] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:17.529 pt1 00:23:17.529 06:49:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.529 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:17.529 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:17.529 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:17.529 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:23:17.529 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:17.529 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:17.529 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:17.529 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:17.529 06:49:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:23:17.529 06:49:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.529 06:49:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:17.529 malloc2 00:23:17.529 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:17.529 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:17.529 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.529 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:17.529 [2024-12-06 06:49:36.018361] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:17.529 [2024-12-06 06:49:36.018451] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.529 [2024-12-06 06:49:36.018484] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:17.529 [2024-12-06 06:49:36.018499] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.529 [2024-12-06 06:49:36.021244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.529 [2024-12-06 06:49:36.021449] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:17.529 pt2 00:23:17.529 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.529 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:17.529 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:17.529 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:23:17.529 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.529 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:17.529 [2024-12-06 06:49:36.026376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is 
claimed 00:23:17.529 [2024-12-06 06:49:36.029089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:17.529 [2024-12-06 06:49:36.029471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:17.529 [2024-12-06 06:49:36.029622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:17.529 [2024-12-06 06:49:36.029778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:17.529 [2024-12-06 06:49:36.030053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:17.529 [2024-12-06 06:49:36.030082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:17.529 [2024-12-06 06:49:36.030264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:17.529 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.529 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:17.529 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:17.529 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:17.529 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:17.530 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:17.530 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:17.530 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:17.530 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:23:17.530 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:17.530 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:17.530 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.530 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.530 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.530 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:17.530 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.530 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:17.530 "name": "raid_bdev1", 00:23:17.530 "uuid": "d005e423-9547-4825-9b15-3dafd1c4db61", 00:23:17.530 "strip_size_kb": 0, 00:23:17.530 "state": "online", 00:23:17.530 "raid_level": "raid1", 00:23:17.530 "superblock": true, 00:23:17.530 "num_base_bdevs": 2, 00:23:17.530 "num_base_bdevs_discovered": 2, 00:23:17.530 "num_base_bdevs_operational": 2, 00:23:17.530 "base_bdevs_list": [ 00:23:17.530 { 00:23:17.530 "name": "pt1", 00:23:17.530 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:17.530 "is_configured": true, 00:23:17.530 "data_offset": 256, 00:23:17.530 "data_size": 7936 00:23:17.530 }, 00:23:17.530 { 00:23:17.530 "name": "pt2", 00:23:17.530 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:17.530 "is_configured": true, 00:23:17.530 "data_offset": 256, 00:23:17.530 "data_size": 7936 00:23:17.530 } 00:23:17.530 ] 00:23:17.530 }' 00:23:17.530 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:17.530 06:49:36 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.097 [2024-12-06 06:49:36.566985] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:18.097 "name": "raid_bdev1", 00:23:18.097 "aliases": [ 00:23:18.097 "d005e423-9547-4825-9b15-3dafd1c4db61" 00:23:18.097 ], 00:23:18.097 "product_name": "Raid Volume", 00:23:18.097 "block_size": 4096, 00:23:18.097 "num_blocks": 7936, 00:23:18.097 "uuid": "d005e423-9547-4825-9b15-3dafd1c4db61", 00:23:18.097 "md_size": 32, 00:23:18.097 "md_interleave": false, 
00:23:18.097 "dif_type": 0, 00:23:18.097 "assigned_rate_limits": { 00:23:18.097 "rw_ios_per_sec": 0, 00:23:18.097 "rw_mbytes_per_sec": 0, 00:23:18.097 "r_mbytes_per_sec": 0, 00:23:18.097 "w_mbytes_per_sec": 0 00:23:18.097 }, 00:23:18.097 "claimed": false, 00:23:18.097 "zoned": false, 00:23:18.097 "supported_io_types": { 00:23:18.097 "read": true, 00:23:18.097 "write": true, 00:23:18.097 "unmap": false, 00:23:18.097 "flush": false, 00:23:18.097 "reset": true, 00:23:18.097 "nvme_admin": false, 00:23:18.097 "nvme_io": false, 00:23:18.097 "nvme_io_md": false, 00:23:18.097 "write_zeroes": true, 00:23:18.097 "zcopy": false, 00:23:18.097 "get_zone_info": false, 00:23:18.097 "zone_management": false, 00:23:18.097 "zone_append": false, 00:23:18.097 "compare": false, 00:23:18.097 "compare_and_write": false, 00:23:18.097 "abort": false, 00:23:18.097 "seek_hole": false, 00:23:18.097 "seek_data": false, 00:23:18.097 "copy": false, 00:23:18.097 "nvme_iov_md": false 00:23:18.097 }, 00:23:18.097 "memory_domains": [ 00:23:18.097 { 00:23:18.097 "dma_device_id": "system", 00:23:18.097 "dma_device_type": 1 00:23:18.097 }, 00:23:18.097 { 00:23:18.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.097 "dma_device_type": 2 00:23:18.097 }, 00:23:18.097 { 00:23:18.097 "dma_device_id": "system", 00:23:18.097 "dma_device_type": 1 00:23:18.097 }, 00:23:18.097 { 00:23:18.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.097 "dma_device_type": 2 00:23:18.097 } 00:23:18.097 ], 00:23:18.097 "driver_specific": { 00:23:18.097 "raid": { 00:23:18.097 "uuid": "d005e423-9547-4825-9b15-3dafd1c4db61", 00:23:18.097 "strip_size_kb": 0, 00:23:18.097 "state": "online", 00:23:18.097 "raid_level": "raid1", 00:23:18.097 "superblock": true, 00:23:18.097 "num_base_bdevs": 2, 00:23:18.097 "num_base_bdevs_discovered": 2, 00:23:18.097 "num_base_bdevs_operational": 2, 00:23:18.097 "base_bdevs_list": [ 00:23:18.097 { 00:23:18.097 "name": "pt1", 00:23:18.097 "uuid": "00000000-0000-0000-0000-000000000001", 
00:23:18.097 "is_configured": true, 00:23:18.097 "data_offset": 256, 00:23:18.097 "data_size": 7936 00:23:18.097 }, 00:23:18.097 { 00:23:18.097 "name": "pt2", 00:23:18.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:18.097 "is_configured": true, 00:23:18.097 "data_offset": 256, 00:23:18.097 "data_size": 7936 00:23:18.097 } 00:23:18.097 ] 00:23:18.097 } 00:23:18.097 } 00:23:18.097 }' 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:18.097 pt2' 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:18.097 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 
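The `verify_raid_bdev_properties` section above builds the tuple `[.block_size, .md_size, .md_interleave, .dif_type]` for the raid volume and for each configured base bdev, requiring them to match ("4096 32 false 0" on both sides in this run, reflecting the separate-metadata layout: 4096-byte blocks with 32 bytes of non-interleaved metadata). A Python sketch of that comparison, with values taken from the `bdev_get_bdevs` output logged above (illustrative only, not the test's actual code):

```python
# Property tuple checked by the test for the md_separate case.
# Values come straight from the bdev_get_bdevs output in the log.
raid_props = {"block_size": 4096, "md_size": 32, "md_interleave": False, "dif_type": 0}
base_props = {
    "pt1": {"block_size": 4096, "md_size": 32, "md_interleave": False, "dif_type": 0},
    "pt2": {"block_size": 4096, "md_size": 32, "md_interleave": False, "dif_type": 0},
}

def fmt(p):
    # Equivalent of jq '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    keys = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join(str(p[k]).lower() for k in keys)

cmp_raid_bdev = fmt(raid_props)
for name, props in base_props.items():
    # Every configured base bdev must report the identical property tuple,
    # otherwise the raid volume could not have been assembled consistently.
    assert fmt(props) == cmp_raid_bdev, f"{name} property mismatch"
```

The jq join produces lowercase `false` for booleans, which is why the sketch lowercases the stringified values before comparing.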
32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:18.356 [2024-12-06 06:49:36.838951] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d005e423-9547-4825-9b15-3dafd1c4db61 00:23:18.356 06:49:36 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z d005e423-9547-4825-9b15-3dafd1c4db61 ']' 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.356 [2024-12-06 06:49:36.890658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:18.356 [2024-12-06 06:49:36.890843] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:18.356 [2024-12-06 06:49:36.891090] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:18.356 [2024-12-06 06:49:36.891284] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:18.356 [2024-12-06 06:49:36.891435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:18.356 06:49:36 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:18.356 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.357 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.357 06:49:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false 
== true ']' 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.615 [2024-12-06 06:49:37.034675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:18.615 [2024-12-06 06:49:37.037645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:18.615 [2024-12-06 06:49:37.037878] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:18.615 [2024-12-06 06:49:37.037989] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:23:18.615 [2024-12-06 06:49:37.038019] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:18.615 [2024-12-06 06:49:37.038037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:23:18.615 request: 00:23:18.615 { 00:23:18.615 "name": "raid_bdev1", 00:23:18.615 "raid_level": "raid1", 00:23:18.615 "base_bdevs": [ 00:23:18.615 "malloc1", 00:23:18.615 "malloc2" 00:23:18.615 ], 00:23:18.615 "superblock": false, 00:23:18.615 "method": "bdev_raid_create", 00:23:18.615 "req_id": 1 00:23:18.615 } 00:23:18.615 Got JSON-RPC error response 00:23:18.615 response: 00:23:18.615 { 00:23:18.615 "code": -17, 00:23:18.615 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:18.615 } 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.615 06:49:37 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.615 [2024-12-06 06:49:37.102842] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:18.615 [2024-12-06 06:49:37.103171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.615 [2024-12-06 06:49:37.103247] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:18.615 [2024-12-06 06:49:37.103455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.615 [2024-12-06 06:49:37.106749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.615 [2024-12-06 06:49:37.106909] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:18.615 [2024-12-06 06:49:37.107189] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:18.615 [2024-12-06 06:49:37.107374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:18.615 pt1 00:23:18.615 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.616 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:18.616 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:18.616 
06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:18.616 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:18.616 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:18.616 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:18.616 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:18.616 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:18.616 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:18.616 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:18.616 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.616 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.616 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.616 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.616 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.616 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:18.616 "name": "raid_bdev1", 00:23:18.616 "uuid": "d005e423-9547-4825-9b15-3dafd1c4db61", 00:23:18.616 "strip_size_kb": 0, 00:23:18.616 "state": "configuring", 00:23:18.616 "raid_level": "raid1", 00:23:18.616 "superblock": true, 00:23:18.616 "num_base_bdevs": 2, 00:23:18.616 "num_base_bdevs_discovered": 1, 00:23:18.616 
"num_base_bdevs_operational": 2, 00:23:18.616 "base_bdevs_list": [ 00:23:18.616 { 00:23:18.616 "name": "pt1", 00:23:18.616 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:18.616 "is_configured": true, 00:23:18.616 "data_offset": 256, 00:23:18.616 "data_size": 7936 00:23:18.616 }, 00:23:18.616 { 00:23:18.616 "name": null, 00:23:18.616 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:18.616 "is_configured": false, 00:23:18.616 "data_offset": 256, 00:23:18.616 "data_size": 7936 00:23:18.616 } 00:23:18.616 ] 00:23:18.616 }' 00:23:18.616 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:18.616 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.184 [2024-12-06 06:49:37.667499] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:19.184 [2024-12-06 06:49:37.667895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.184 [2024-12-06 06:49:37.667972] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:19.184 [2024-12-06 06:49:37.668272] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.184 
[2024-12-06 06:49:37.668681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.184 [2024-12-06 06:49:37.668853] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:19.184 [2024-12-06 06:49:37.669041] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:19.184 [2024-12-06 06:49:37.669192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:19.184 [2024-12-06 06:49:37.669454] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:19.184 [2024-12-06 06:49:37.669608] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:19.184 [2024-12-06 06:49:37.669822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:19.184 [2024-12-06 06:49:37.670102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:19.184 [2024-12-06 06:49:37.670227] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:19.184 [2024-12-06 06:49:37.670492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:19.184 pt2 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.184 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:19.184 "name": "raid_bdev1", 00:23:19.184 "uuid": "d005e423-9547-4825-9b15-3dafd1c4db61", 00:23:19.184 "strip_size_kb": 0, 00:23:19.184 "state": "online", 00:23:19.184 "raid_level": "raid1", 00:23:19.184 "superblock": true, 00:23:19.184 "num_base_bdevs": 2, 00:23:19.184 "num_base_bdevs_discovered": 2, 00:23:19.184 "num_base_bdevs_operational": 2, 00:23:19.184 "base_bdevs_list": [ 00:23:19.184 { 00:23:19.184 "name": 
"pt1", 00:23:19.184 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:19.184 "is_configured": true, 00:23:19.184 "data_offset": 256, 00:23:19.184 "data_size": 7936 00:23:19.184 }, 00:23:19.184 { 00:23:19.184 "name": "pt2", 00:23:19.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:19.185 "is_configured": true, 00:23:19.185 "data_offset": 256, 00:23:19.185 "data_size": 7936 00:23:19.185 } 00:23:19.185 ] 00:23:19.185 }' 00:23:19.185 06:49:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:19.185 06:49:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.753 [2024-12-06 06:49:38.196024] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:19.753 06:49:38 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:19.753 "name": "raid_bdev1", 00:23:19.753 "aliases": [ 00:23:19.753 "d005e423-9547-4825-9b15-3dafd1c4db61" 00:23:19.753 ], 00:23:19.753 "product_name": "Raid Volume", 00:23:19.753 "block_size": 4096, 00:23:19.753 "num_blocks": 7936, 00:23:19.753 "uuid": "d005e423-9547-4825-9b15-3dafd1c4db61", 00:23:19.753 "md_size": 32, 00:23:19.753 "md_interleave": false, 00:23:19.753 "dif_type": 0, 00:23:19.753 "assigned_rate_limits": { 00:23:19.753 "rw_ios_per_sec": 0, 00:23:19.753 "rw_mbytes_per_sec": 0, 00:23:19.753 "r_mbytes_per_sec": 0, 00:23:19.753 "w_mbytes_per_sec": 0 00:23:19.753 }, 00:23:19.753 "claimed": false, 00:23:19.753 "zoned": false, 00:23:19.753 "supported_io_types": { 00:23:19.753 "read": true, 00:23:19.753 "write": true, 00:23:19.753 "unmap": false, 00:23:19.753 "flush": false, 00:23:19.753 "reset": true, 00:23:19.753 "nvme_admin": false, 00:23:19.753 "nvme_io": false, 00:23:19.753 "nvme_io_md": false, 00:23:19.753 "write_zeroes": true, 00:23:19.753 "zcopy": false, 00:23:19.753 "get_zone_info": false, 00:23:19.753 "zone_management": false, 00:23:19.753 "zone_append": false, 00:23:19.753 "compare": false, 00:23:19.753 "compare_and_write": false, 00:23:19.753 "abort": false, 00:23:19.753 "seek_hole": false, 00:23:19.753 "seek_data": false, 00:23:19.753 "copy": false, 00:23:19.753 "nvme_iov_md": false 00:23:19.753 }, 00:23:19.753 "memory_domains": [ 00:23:19.753 { 00:23:19.753 "dma_device_id": "system", 00:23:19.753 "dma_device_type": 1 00:23:19.753 }, 00:23:19.753 { 00:23:19.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.753 "dma_device_type": 2 00:23:19.753 }, 00:23:19.753 { 00:23:19.753 "dma_device_id": "system", 00:23:19.753 "dma_device_type": 1 00:23:19.753 }, 00:23:19.753 { 00:23:19.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.753 
"dma_device_type": 2 00:23:19.753 } 00:23:19.753 ], 00:23:19.753 "driver_specific": { 00:23:19.753 "raid": { 00:23:19.753 "uuid": "d005e423-9547-4825-9b15-3dafd1c4db61", 00:23:19.753 "strip_size_kb": 0, 00:23:19.753 "state": "online", 00:23:19.753 "raid_level": "raid1", 00:23:19.753 "superblock": true, 00:23:19.753 "num_base_bdevs": 2, 00:23:19.753 "num_base_bdevs_discovered": 2, 00:23:19.753 "num_base_bdevs_operational": 2, 00:23:19.753 "base_bdevs_list": [ 00:23:19.753 { 00:23:19.753 "name": "pt1", 00:23:19.753 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:19.753 "is_configured": true, 00:23:19.753 "data_offset": 256, 00:23:19.753 "data_size": 7936 00:23:19.753 }, 00:23:19.753 { 00:23:19.753 "name": "pt2", 00:23:19.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:19.753 "is_configured": true, 00:23:19.753 "data_offset": 256, 00:23:19.753 "data_size": 7936 00:23:19.753 } 00:23:19.753 ] 00:23:19.753 } 00:23:19.753 } 00:23:19.753 }' 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:19.753 pt2' 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 
00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.753 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:20.012 [2024-12-06 06:49:38.452181] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' d005e423-9547-4825-9b15-3dafd1c4db61 '!=' d005e423-9547-4825-9b15-3dafd1c4db61 ']' 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.012 [2024-12-06 06:49:38.503821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:20.012 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:20.012 
06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:20.013 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:20.013 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:20.013 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:20.013 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:20.013 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:20.013 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.013 06:49:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.013 06:49:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.013 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.013 06:49:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.013 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:20.013 "name": "raid_bdev1", 00:23:20.013 "uuid": "d005e423-9547-4825-9b15-3dafd1c4db61", 00:23:20.013 "strip_size_kb": 0, 00:23:20.013 "state": "online", 00:23:20.013 "raid_level": "raid1", 00:23:20.013 "superblock": true, 00:23:20.013 "num_base_bdevs": 2, 00:23:20.013 "num_base_bdevs_discovered": 1, 00:23:20.013 "num_base_bdevs_operational": 1, 00:23:20.013 "base_bdevs_list": [ 00:23:20.013 { 00:23:20.013 "name": null, 00:23:20.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.013 "is_configured": false, 00:23:20.013 "data_offset": 0, 00:23:20.013 
"data_size": 7936 00:23:20.013 }, 00:23:20.013 { 00:23:20.013 "name": "pt2", 00:23:20.013 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:20.013 "is_configured": true, 00:23:20.013 "data_offset": 256, 00:23:20.013 "data_size": 7936 00:23:20.013 } 00:23:20.013 ] 00:23:20.013 }' 00:23:20.013 06:49:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:20.013 06:49:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.580 [2024-12-06 06:49:39.016487] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:20.580 [2024-12-06 06:49:39.016568] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:20.580 [2024-12-06 06:49:39.016691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:20.580 [2024-12-06 06:49:39.016789] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:20.580 [2024-12-06 06:49:39.016811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.580 [2024-12-06 06:49:39.084425] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:20.580 [2024-12-06 06:49:39.084519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:20.580 [2024-12-06 06:49:39.084560] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:20.580 [2024-12-06 06:49:39.084578] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:20.580 [2024-12-06 06:49:39.087519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:20.580 [2024-12-06 06:49:39.087586] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:20.580 [2024-12-06 06:49:39.087664] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:20.580 [2024-12-06 06:49:39.087733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:20.580 [2024-12-06 06:49:39.087894] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:20.580 [2024-12-06 06:49:39.087916] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:20.580 [2024-12-06 06:49:39.088006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:20.580 [2024-12-06 06:49:39.088158] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:20.580 [2024-12-06 06:49:39.088173] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:20.580 [2024-12-06 06:49:39.088310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:20.580 pt2 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.580 
06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:20.580 "name": "raid_bdev1", 00:23:20.580 "uuid": 
"d005e423-9547-4825-9b15-3dafd1c4db61", 00:23:20.580 "strip_size_kb": 0, 00:23:20.580 "state": "online", 00:23:20.580 "raid_level": "raid1", 00:23:20.580 "superblock": true, 00:23:20.580 "num_base_bdevs": 2, 00:23:20.580 "num_base_bdevs_discovered": 1, 00:23:20.580 "num_base_bdevs_operational": 1, 00:23:20.580 "base_bdevs_list": [ 00:23:20.580 { 00:23:20.580 "name": null, 00:23:20.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.580 "is_configured": false, 00:23:20.580 "data_offset": 256, 00:23:20.580 "data_size": 7936 00:23:20.580 }, 00:23:20.580 { 00:23:20.580 "name": "pt2", 00:23:20.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:20.580 "is_configured": true, 00:23:20.580 "data_offset": 256, 00:23:20.580 "data_size": 7936 00:23:20.580 } 00:23:20.580 ] 00:23:20.580 }' 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:20.580 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:21.147 [2024-12-06 06:49:39.644640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:21.147 [2024-12-06 06:49:39.644699] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:21.147 [2024-12-06 06:49:39.644818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:21.147 [2024-12-06 06:49:39.644900] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:21.147 [2024-12-06 06:49:39.644917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:21.147 [2024-12-06 06:49:39.704719] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:21.147 [2024-12-06 06:49:39.704828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.147 [2024-12-06 06:49:39.704862] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:23:21.147 [2024-12-06 06:49:39.704878] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.147 [2024-12-06 
06:49:39.707842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.147 [2024-12-06 06:49:39.707890] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:21.147 [2024-12-06 06:49:39.707983] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:21.147 [2024-12-06 06:49:39.708047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:21.147 [2024-12-06 06:49:39.708228] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:21.147 [2024-12-06 06:49:39.708247] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:21.147 [2024-12-06 06:49:39.708277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:21.147 [2024-12-06 06:49:39.708363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:21.147 [2024-12-06 06:49:39.708474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:23:21.147 [2024-12-06 06:49:39.708489] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:21.147 [2024-12-06 06:49:39.708600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:21.147 [2024-12-06 06:49:39.708746] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:21.147 [2024-12-06 06:49:39.708765] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:21.147 [2024-12-06 06:49:39.708965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:21.147 pt1 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.147 06:49:39 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:21.147 "name": "raid_bdev1", 00:23:21.147 "uuid": "d005e423-9547-4825-9b15-3dafd1c4db61", 00:23:21.147 "strip_size_kb": 0, 00:23:21.147 "state": "online", 00:23:21.147 "raid_level": "raid1", 00:23:21.147 "superblock": true, 00:23:21.147 "num_base_bdevs": 2, 00:23:21.147 "num_base_bdevs_discovered": 1, 00:23:21.147 "num_base_bdevs_operational": 1, 00:23:21.147 "base_bdevs_list": [ 00:23:21.147 { 00:23:21.147 "name": null, 00:23:21.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.147 "is_configured": false, 00:23:21.147 "data_offset": 256, 00:23:21.147 "data_size": 7936 00:23:21.147 }, 00:23:21.147 { 00:23:21.147 "name": "pt2", 00:23:21.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:21.147 "is_configured": true, 00:23:21.147 "data_offset": 256, 00:23:21.147 "data_size": 7936 00:23:21.147 } 00:23:21.147 ] 00:23:21.147 }' 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:21.147 06:49:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:21.713 06:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:21.713 06:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:21.714 06:49:40 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:21.714 [2024-12-06 06:49:40.273446] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' d005e423-9547-4825-9b15-3dafd1c4db61 '!=' d005e423-9547-4825-9b15-3dafd1c4db61 ']' 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88068 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88068 ']' 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 88068 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88068 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:21.714 killing process with pid 88068 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88068' 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@973 -- # kill 88068 00:23:21.714 [2024-12-06 06:49:40.346743] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:21.714 06:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 88068 00:23:21.714 [2024-12-06 06:49:40.346899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:21.714 [2024-12-06 06:49:40.346988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:21.714 [2024-12-06 06:49:40.347020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:21.972 [2024-12-06 06:49:40.561738] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:23.348 06:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:23:23.348 00:23:23.348 real 0m6.977s 00:23:23.348 user 0m10.949s 00:23:23.348 sys 0m1.082s 00:23:23.348 06:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.348 06:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:23.348 ************************************ 00:23:23.348 END TEST raid_superblock_test_md_separate 00:23:23.348 ************************************ 00:23:23.348 06:49:41 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:23:23.348 06:49:41 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:23:23.348 06:49:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:23.348 06:49:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.348 06:49:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:23.348 ************************************ 00:23:23.348 START TEST raid_rebuild_test_sb_md_separate 00:23:23.348 
************************************ 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:23.348 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:23.349 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:23.349 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:23.349 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:23.349 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88398 00:23:23.349 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88398 00:23:23.349 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:23.349 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88398 ']' 00:23:23.349 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:23.349 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.349 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.349 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.349 06:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:23.349 [2024-12-06 06:49:41.901738] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:23:23.349 [2024-12-06 06:49:41.901987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88398 ] 00:23:23.349 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:23.349 Zero copy mechanism will not be used. 
00:23:23.608 [2024-12-06 06:49:42.090641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.608 [2024-12-06 06:49:42.237871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.866 [2024-12-06 06:49:42.464107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:23.866 [2024-12-06 06:49:42.464221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:24.432 06:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.432 06:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:23:24.432 06:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:24.432 06:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:23:24.432 06:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.432 06:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.432 BaseBdev1_malloc 00:23:24.432 06:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.432 06:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:24.432 06:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.432 06:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.432 [2024-12-06 06:49:42.951117] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:24.432 [2024-12-06 06:49:42.951207] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.432 [2024-12-06 06:49:42.951244] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:24.432 [2024-12-06 06:49:42.951265] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.432 [2024-12-06 06:49:42.953992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.432 [2024-12-06 06:49:42.954052] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:24.432 BaseBdev1 00:23:24.432 06:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.432 06:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:24.432 06:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:23:24.432 06:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.432 06:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.432 BaseBdev2_malloc 00:23:24.432 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.432 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:24.432 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.432 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.432 [2024-12-06 06:49:43.013049] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:24.432 [2024-12-06 06:49:43.013131] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.432 [2024-12-06 06:49:43.013161] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:23:24.432 [2024-12-06 06:49:43.013180] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.432 [2024-12-06 06:49:43.015971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.432 [2024-12-06 06:49:43.016017] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:24.432 BaseBdev2 00:23:24.432 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.432 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:23:24.432 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.432 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.432 spare_malloc 00:23:24.432 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.432 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:24.432 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.432 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.689 spare_delay 00:23:24.689 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.689 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:24.689 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.689 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.689 [2024-12-06 
06:49:43.088284] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:24.689 [2024-12-06 06:49:43.088363] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.689 [2024-12-06 06:49:43.088395] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:24.689 [2024-12-06 06:49:43.088414] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.689 [2024-12-06 06:49:43.091210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.689 [2024-12-06 06:49:43.091258] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:24.689 spare 00:23:24.689 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.689 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:24.689 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.689 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.690 [2024-12-06 06:49:43.096335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:24.690 [2024-12-06 06:49:43.099126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:24.690 [2024-12-06 06:49:43.099389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:24.690 [2024-12-06 06:49:43.099413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:24.690 [2024-12-06 06:49:43.099530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:24.690 [2024-12-06 06:49:43.100064] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 
00:23:24.690 [2024-12-06 06:49:43.100119] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:24.690 [2024-12-06 06:49:43.100448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:24.690 06:49:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:24.690 "name": "raid_bdev1", 00:23:24.690 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:24.690 "strip_size_kb": 0, 00:23:24.690 "state": "online", 00:23:24.690 "raid_level": "raid1", 00:23:24.690 "superblock": true, 00:23:24.690 "num_base_bdevs": 2, 00:23:24.690 "num_base_bdevs_discovered": 2, 00:23:24.690 "num_base_bdevs_operational": 2, 00:23:24.690 "base_bdevs_list": [ 00:23:24.690 { 00:23:24.690 "name": "BaseBdev1", 00:23:24.690 "uuid": "73303a7b-c314-5daa-becf-6bdf95e0f658", 00:23:24.690 "is_configured": true, 00:23:24.690 "data_offset": 256, 00:23:24.690 "data_size": 7936 00:23:24.690 }, 00:23:24.690 { 00:23:24.690 "name": "BaseBdev2", 00:23:24.690 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:24.690 "is_configured": true, 00:23:24.690 "data_offset": 256, 00:23:24.690 "data_size": 7936 00:23:24.690 } 00:23:24.690 ] 00:23:24.690 }' 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:24.690 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:25.256 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:25.256 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.256 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:25.256 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:25.256 [2024-12-06 06:49:43.609075] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:23:25.256 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.256 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:23:25.256 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:25.256 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.256 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.256 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:25.256 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.256 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:23:25.257 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:25.257 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:25.257 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:25.257 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:25.257 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:25.257 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:25.257 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:25.257 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:25.257 06:49:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:25.257 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:23:25.257 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:25.257 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:25.257 06:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:25.515 [2024-12-06 06:49:43.996979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:25.515 /dev/nbd0 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:25.515 1+0 records in 00:23:25.515 1+0 records out 00:23:25.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258681 s, 15.8 MB/s 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:23:25.515 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:23:26.449 7936+0 records in 00:23:26.449 7936+0 records out 00:23:26.449 32505856 bytes (33 MB, 31 MiB) copied, 0.917907 s, 35.4 MB/s 00:23:26.449 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:26.449 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:26.449 06:49:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:26.449 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:26.449 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:23:26.449 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:26.449 06:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:26.708 [2024-12-06 06:49:45.307463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:26.708 
[2024-12-06 06:49:45.327622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.708 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.966 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:26.966 "name": "raid_bdev1", 00:23:26.966 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:26.966 "strip_size_kb": 0, 00:23:26.966 "state": "online", 00:23:26.966 "raid_level": "raid1", 00:23:26.966 "superblock": true, 00:23:26.966 "num_base_bdevs": 2, 00:23:26.966 "num_base_bdevs_discovered": 1, 00:23:26.966 "num_base_bdevs_operational": 1, 00:23:26.966 "base_bdevs_list": [ 00:23:26.966 { 00:23:26.966 "name": null, 00:23:26.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.966 "is_configured": false, 00:23:26.966 "data_offset": 0, 00:23:26.966 "data_size": 7936 00:23:26.966 }, 00:23:26.966 { 00:23:26.966 "name": "BaseBdev2", 00:23:26.966 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:26.966 "is_configured": true, 00:23:26.966 "data_offset": 256, 00:23:26.966 "data_size": 7936 00:23:26.966 } 00:23:26.966 ] 00:23:26.966 }' 00:23:26.966 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:26.966 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:27.530 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:27.530 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.530 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:27.530 [2024-12-06 06:49:45.919843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:27.531 [2024-12-06 06:49:45.934601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:23:27.531 06:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.531 06:49:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:27.531 [2024-12-06 06:49:45.937396] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:28.463 06:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:28.463 06:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:28.463 06:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:28.463 06:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:28.463 06:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:28.463 06:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.463 06:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.463 06:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.463 06:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.463 06:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.463 06:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:28.463 "name": "raid_bdev1", 00:23:28.463 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:28.463 "strip_size_kb": 0, 00:23:28.463 "state": "online", 00:23:28.463 "raid_level": "raid1", 00:23:28.463 "superblock": true, 00:23:28.463 "num_base_bdevs": 2, 00:23:28.463 "num_base_bdevs_discovered": 2, 00:23:28.463 "num_base_bdevs_operational": 2, 00:23:28.463 "process": { 00:23:28.463 "type": "rebuild", 00:23:28.463 
"target": "spare", 00:23:28.463 "progress": { 00:23:28.463 "blocks": 2304, 00:23:28.463 "percent": 29 00:23:28.463 } 00:23:28.463 }, 00:23:28.463 "base_bdevs_list": [ 00:23:28.463 { 00:23:28.463 "name": "spare", 00:23:28.463 "uuid": "138c3ec6-f0cf-5807-99d6-8ac66a5539cc", 00:23:28.463 "is_configured": true, 00:23:28.463 "data_offset": 256, 00:23:28.463 "data_size": 7936 00:23:28.463 }, 00:23:28.463 { 00:23:28.463 "name": "BaseBdev2", 00:23:28.463 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:28.463 "is_configured": true, 00:23:28.463 "data_offset": 256, 00:23:28.463 "data_size": 7936 00:23:28.463 } 00:23:28.463 ] 00:23:28.463 }' 00:23:28.463 06:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:28.463 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:28.463 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:28.463 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:28.463 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:28.463 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.463 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.720 [2024-12-06 06:49:47.111413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:28.720 [2024-12-06 06:49:47.149476] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:28.720 [2024-12-06 06:49:47.149755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.720 [2024-12-06 06:49:47.149784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:28.720 
[2024-12-06 06:49:47.149801] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:28.720 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.720 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:28.720 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:28.720 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:28.720 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:28.720 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:28.720 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:28.721 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:28.721 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:28.721 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:28.721 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:28.721 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.721 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.721 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.721 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.721 06:49:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.721 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:28.721 "name": "raid_bdev1", 00:23:28.721 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:28.721 "strip_size_kb": 0, 00:23:28.721 "state": "online", 00:23:28.721 "raid_level": "raid1", 00:23:28.721 "superblock": true, 00:23:28.721 "num_base_bdevs": 2, 00:23:28.721 "num_base_bdevs_discovered": 1, 00:23:28.721 "num_base_bdevs_operational": 1, 00:23:28.721 "base_bdevs_list": [ 00:23:28.721 { 00:23:28.721 "name": null, 00:23:28.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.721 "is_configured": false, 00:23:28.721 "data_offset": 0, 00:23:28.721 "data_size": 7936 00:23:28.721 }, 00:23:28.721 { 00:23:28.721 "name": "BaseBdev2", 00:23:28.721 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:28.721 "is_configured": true, 00:23:28.721 "data_offset": 256, 00:23:28.721 "data_size": 7936 00:23:28.721 } 00:23:28.721 ] 00:23:28.721 }' 00:23:28.721 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:28.721 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:29.286 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:29.286 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:29.286 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:29.286 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:29.286 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:29.286 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.286 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.286 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.286 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:29.286 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.286 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:29.286 "name": "raid_bdev1", 00:23:29.286 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:29.286 "strip_size_kb": 0, 00:23:29.286 "state": "online", 00:23:29.286 "raid_level": "raid1", 00:23:29.286 "superblock": true, 00:23:29.286 "num_base_bdevs": 2, 00:23:29.286 "num_base_bdevs_discovered": 1, 00:23:29.286 "num_base_bdevs_operational": 1, 00:23:29.286 "base_bdevs_list": [ 00:23:29.286 { 00:23:29.286 "name": null, 00:23:29.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.286 "is_configured": false, 00:23:29.286 "data_offset": 0, 00:23:29.286 "data_size": 7936 00:23:29.286 }, 00:23:29.286 { 00:23:29.287 "name": "BaseBdev2", 00:23:29.287 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:29.287 "is_configured": true, 00:23:29.287 "data_offset": 256, 00:23:29.287 "data_size": 7936 00:23:29.287 } 00:23:29.287 ] 00:23:29.287 }' 00:23:29.287 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:29.287 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:29.287 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:29.287 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:29.287 
06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:29.287 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.287 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:29.287 [2024-12-06 06:49:47.837250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:29.287 [2024-12-06 06:49:47.850718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:23:29.287 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.287 06:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:29.287 [2024-12-06 06:49:47.853466] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:30.224 06:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:30.224 06:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:30.224 06:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:30.224 06:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:30.224 06:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:30.224 06:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.224 06:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.224 06:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.224 06:49:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:30.486 06:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.486 06:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:30.486 "name": "raid_bdev1", 00:23:30.486 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:30.486 "strip_size_kb": 0, 00:23:30.486 "state": "online", 00:23:30.486 "raid_level": "raid1", 00:23:30.486 "superblock": true, 00:23:30.486 "num_base_bdevs": 2, 00:23:30.486 "num_base_bdevs_discovered": 2, 00:23:30.486 "num_base_bdevs_operational": 2, 00:23:30.486 "process": { 00:23:30.486 "type": "rebuild", 00:23:30.486 "target": "spare", 00:23:30.486 "progress": { 00:23:30.486 "blocks": 2560, 00:23:30.486 "percent": 32 00:23:30.486 } 00:23:30.486 }, 00:23:30.486 "base_bdevs_list": [ 00:23:30.486 { 00:23:30.486 "name": "spare", 00:23:30.486 "uuid": "138c3ec6-f0cf-5807-99d6-8ac66a5539cc", 00:23:30.486 "is_configured": true, 00:23:30.486 "data_offset": 256, 00:23:30.486 "data_size": 7936 00:23:30.486 }, 00:23:30.486 { 00:23:30.486 "name": "BaseBdev2", 00:23:30.486 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:30.486 "is_configured": true, 00:23:30.486 "data_offset": 256, 00:23:30.486 "data_size": 7936 00:23:30.486 } 00:23:30.486 ] 00:23:30.486 }' 00:23:30.486 06:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:30.486 06:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:30.486 06:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:30.486 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:30.486 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = 
true ']' 00:23:30.486 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:30.486 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:30.486 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:30.486 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:30.486 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:30.486 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=769 00:23:30.486 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:30.486 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:30.486 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:30.486 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:30.486 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:30.487 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:30.487 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.487 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.487 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:30.487 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.487 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.487 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:30.487 "name": "raid_bdev1", 00:23:30.487 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:30.487 "strip_size_kb": 0, 00:23:30.487 "state": "online", 00:23:30.487 "raid_level": "raid1", 00:23:30.487 "superblock": true, 00:23:30.487 "num_base_bdevs": 2, 00:23:30.487 "num_base_bdevs_discovered": 2, 00:23:30.487 "num_base_bdevs_operational": 2, 00:23:30.487 "process": { 00:23:30.487 "type": "rebuild", 00:23:30.487 "target": "spare", 00:23:30.487 "progress": { 00:23:30.487 "blocks": 2816, 00:23:30.487 "percent": 35 00:23:30.487 } 00:23:30.487 }, 00:23:30.487 "base_bdevs_list": [ 00:23:30.487 { 00:23:30.487 "name": "spare", 00:23:30.487 "uuid": "138c3ec6-f0cf-5807-99d6-8ac66a5539cc", 00:23:30.487 "is_configured": true, 00:23:30.487 "data_offset": 256, 00:23:30.487 "data_size": 7936 00:23:30.487 }, 00:23:30.487 { 00:23:30.487 "name": "BaseBdev2", 00:23:30.487 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:30.487 "is_configured": true, 00:23:30.487 "data_offset": 256, 00:23:30.487 "data_size": 7936 00:23:30.487 } 00:23:30.487 ] 00:23:30.487 }' 00:23:30.487 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:30.487 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:30.487 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:30.745 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:30.745 06:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:31.682 06:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:31.682 06:49:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:31.682 06:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:31.682 06:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:31.682 06:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:31.682 06:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:31.682 06:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.682 06:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.682 06:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.682 06:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:31.682 06:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.682 06:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:31.682 "name": "raid_bdev1", 00:23:31.682 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:31.682 "strip_size_kb": 0, 00:23:31.682 "state": "online", 00:23:31.682 "raid_level": "raid1", 00:23:31.682 "superblock": true, 00:23:31.682 "num_base_bdevs": 2, 00:23:31.682 "num_base_bdevs_discovered": 2, 00:23:31.682 "num_base_bdevs_operational": 2, 00:23:31.682 "process": { 00:23:31.682 "type": "rebuild", 00:23:31.682 "target": "spare", 00:23:31.682 "progress": { 00:23:31.682 "blocks": 5888, 00:23:31.682 "percent": 74 00:23:31.682 } 00:23:31.682 }, 00:23:31.682 "base_bdevs_list": [ 00:23:31.682 { 00:23:31.682 "name": "spare", 00:23:31.682 "uuid": 
"138c3ec6-f0cf-5807-99d6-8ac66a5539cc", 00:23:31.682 "is_configured": true, 00:23:31.682 "data_offset": 256, 00:23:31.682 "data_size": 7936 00:23:31.682 }, 00:23:31.682 { 00:23:31.682 "name": "BaseBdev2", 00:23:31.682 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:31.682 "is_configured": true, 00:23:31.682 "data_offset": 256, 00:23:31.682 "data_size": 7936 00:23:31.682 } 00:23:31.682 ] 00:23:31.682 }' 00:23:31.682 06:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:31.682 06:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:31.682 06:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:31.682 06:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:31.682 06:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:32.618 [2024-12-06 06:49:50.980112] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:32.618 [2024-12-06 06:49:50.980244] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:32.618 [2024-12-06 06:49:50.980439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:32.878 "name": "raid_bdev1", 00:23:32.878 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:32.878 "strip_size_kb": 0, 00:23:32.878 "state": "online", 00:23:32.878 "raid_level": "raid1", 00:23:32.878 "superblock": true, 00:23:32.878 "num_base_bdevs": 2, 00:23:32.878 "num_base_bdevs_discovered": 2, 00:23:32.878 "num_base_bdevs_operational": 2, 00:23:32.878 "base_bdevs_list": [ 00:23:32.878 { 00:23:32.878 "name": "spare", 00:23:32.878 "uuid": "138c3ec6-f0cf-5807-99d6-8ac66a5539cc", 00:23:32.878 "is_configured": true, 00:23:32.878 "data_offset": 256, 00:23:32.878 "data_size": 7936 00:23:32.878 }, 00:23:32.878 { 00:23:32.878 "name": "BaseBdev2", 00:23:32.878 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:32.878 "is_configured": true, 00:23:32.878 "data_offset": 256, 00:23:32.878 "data_size": 7936 00:23:32.878 } 00:23:32.878 ] 00:23:32.878 }' 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d 
]] 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:32.878 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:33.137 "name": "raid_bdev1", 00:23:33.137 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:33.137 "strip_size_kb": 0, 00:23:33.137 "state": "online", 00:23:33.137 "raid_level": "raid1", 00:23:33.137 "superblock": true, 00:23:33.137 "num_base_bdevs": 2, 00:23:33.137 "num_base_bdevs_discovered": 2, 
00:23:33.137 "num_base_bdevs_operational": 2, 00:23:33.137 "base_bdevs_list": [ 00:23:33.137 { 00:23:33.137 "name": "spare", 00:23:33.137 "uuid": "138c3ec6-f0cf-5807-99d6-8ac66a5539cc", 00:23:33.137 "is_configured": true, 00:23:33.137 "data_offset": 256, 00:23:33.137 "data_size": 7936 00:23:33.137 }, 00:23:33.137 { 00:23:33.137 "name": "BaseBdev2", 00:23:33.137 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:33.137 "is_configured": true, 00:23:33.137 "data_offset": 256, 00:23:33.137 "data_size": 7936 00:23:33.137 } 00:23:33.137 ] 00:23:33.137 }' 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:33.137 "name": "raid_bdev1", 00:23:33.137 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:33.137 "strip_size_kb": 0, 00:23:33.137 "state": "online", 00:23:33.137 "raid_level": "raid1", 00:23:33.137 "superblock": true, 00:23:33.137 "num_base_bdevs": 2, 00:23:33.137 "num_base_bdevs_discovered": 2, 00:23:33.137 "num_base_bdevs_operational": 2, 00:23:33.137 "base_bdevs_list": [ 00:23:33.137 { 00:23:33.137 "name": "spare", 00:23:33.137 "uuid": "138c3ec6-f0cf-5807-99d6-8ac66a5539cc", 00:23:33.137 "is_configured": true, 00:23:33.137 "data_offset": 256, 00:23:33.137 "data_size": 7936 00:23:33.137 }, 00:23:33.137 { 00:23:33.137 "name": "BaseBdev2", 00:23:33.137 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:33.137 "is_configured": true, 00:23:33.137 "data_offset": 256, 00:23:33.137 "data_size": 7936 00:23:33.137 } 00:23:33.137 ] 00:23:33.137 }' 00:23:33.137 06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:33.137 
06:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:33.704 [2024-12-06 06:49:52.195283] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:33.704 [2024-12-06 06:49:52.195336] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:33.704 [2024-12-06 06:49:52.195443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:33.704 [2024-12-06 06:49:52.195564] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:33.704 [2024-12-06 06:49:52.195584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:33.704 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:33.963 /dev/nbd0 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:23:33.964 06:49:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:33.964 1+0 records in 00:23:33.964 1+0 records out 00:23:33.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284887 s, 14.4 MB/s 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:33.964 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:34.222 /dev/nbd1 00:23:34.479 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:34.479 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:34.479 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:34.479 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:23:34.479 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:34.479 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:34.479 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:34.479 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:23:34.479 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:34.479 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:34.479 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:34.479 1+0 records in 00:23:34.480 1+0 records out 00:23:34.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340402 s, 12.0 MB/s 00:23:34.480 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:34.480 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:23:34.480 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm 
-f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:34.480 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:34.480 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:23:34.480 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:34.480 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:34.480 06:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:34.480 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:34.480 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:34.480 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:34.480 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:34.480 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:23:34.480 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:34.480 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:35.046 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:35.046 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:35.046 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:35.046 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # 
(( i = 1 )) 00:23:35.046 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:35.046 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:35.046 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:23:35.046 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:23:35.046 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:35.046 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.306 [2024-12-06 06:49:53.809851] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:35.306 [2024-12-06 06:49:53.809937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.306 [2024-12-06 06:49:53.809971] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:35.306 [2024-12-06 06:49:53.809988] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.306 [2024-12-06 06:49:53.812619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.306 [2024-12-06 06:49:53.812669] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:35.306 [2024-12-06 06:49:53.812751] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:35.306 [2024-12-06 06:49:53.812815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:35.306 [2024-12-06 06:49:53.812994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:35.306 spare 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:35.306 
06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.306 [2024-12-06 06:49:53.913104] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:35.306 [2024-12-06 06:49:53.913159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:35.306 [2024-12-06 06:49:53.913280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:23:35.306 [2024-12-06 06:49:53.913469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:35.306 [2024-12-06 06:49:53.913501] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:35.306 [2024-12-06 06:49:53.913668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:35.306 
06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.306 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.594 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:35.594 "name": "raid_bdev1", 00:23:35.594 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:35.594 "strip_size_kb": 0, 00:23:35.594 "state": "online", 00:23:35.594 "raid_level": "raid1", 00:23:35.594 "superblock": true, 00:23:35.594 "num_base_bdevs": 2, 00:23:35.594 "num_base_bdevs_discovered": 2, 00:23:35.594 "num_base_bdevs_operational": 2, 00:23:35.594 "base_bdevs_list": [ 00:23:35.594 { 00:23:35.594 "name": "spare", 00:23:35.594 "uuid": "138c3ec6-f0cf-5807-99d6-8ac66a5539cc", 00:23:35.594 "is_configured": true, 00:23:35.594 "data_offset": 256, 00:23:35.594 "data_size": 7936 00:23:35.594 }, 00:23:35.594 { 00:23:35.594 "name": "BaseBdev2", 00:23:35.594 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:35.594 "is_configured": true, 00:23:35.594 "data_offset": 256, 00:23:35.594 "data_size": 7936 00:23:35.594 } 00:23:35.594 ] 00:23:35.594 }' 00:23:35.594 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:35.594 06:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.853 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:35.853 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:35.853 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:35.853 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:35.853 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:35.853 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.853 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.853 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.853 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.853 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.853 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:35.853 "name": "raid_bdev1", 00:23:35.853 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:35.853 "strip_size_kb": 0, 00:23:35.853 "state": "online", 00:23:35.853 "raid_level": "raid1", 00:23:35.853 "superblock": true, 00:23:35.853 "num_base_bdevs": 2, 00:23:35.853 "num_base_bdevs_discovered": 2, 00:23:35.853 "num_base_bdevs_operational": 2, 00:23:35.853 "base_bdevs_list": [ 00:23:35.853 { 00:23:35.853 "name": "spare", 00:23:35.853 "uuid": "138c3ec6-f0cf-5807-99d6-8ac66a5539cc", 00:23:35.853 
"is_configured": true, 00:23:35.853 "data_offset": 256, 00:23:35.853 "data_size": 7936 00:23:35.853 }, 00:23:35.853 { 00:23:35.853 "name": "BaseBdev2", 00:23:35.853 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:35.853 "is_configured": true, 00:23:35.853 "data_offset": 256, 00:23:35.853 "data_size": 7936 00:23:35.853 } 00:23:35.853 ] 00:23:35.853 }' 00:23:35.853 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:36.132 [2024-12-06 06:49:54.650154] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:36.132 "name": "raid_bdev1", 00:23:36.132 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:36.132 "strip_size_kb": 0, 00:23:36.132 "state": "online", 00:23:36.132 "raid_level": "raid1", 00:23:36.132 "superblock": true, 00:23:36.132 "num_base_bdevs": 2, 00:23:36.132 "num_base_bdevs_discovered": 1, 00:23:36.132 "num_base_bdevs_operational": 1, 00:23:36.132 "base_bdevs_list": [ 00:23:36.132 { 00:23:36.132 "name": null, 00:23:36.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.132 "is_configured": false, 00:23:36.132 "data_offset": 0, 00:23:36.132 "data_size": 7936 00:23:36.132 }, 00:23:36.132 { 00:23:36.132 "name": "BaseBdev2", 00:23:36.132 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:36.132 "is_configured": true, 00:23:36.132 "data_offset": 256, 00:23:36.132 "data_size": 7936 00:23:36.132 } 00:23:36.132 ] 00:23:36.132 }' 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:36.132 06:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:36.700 06:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:36.700 06:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.700 06:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:36.700 [2024-12-06 06:49:55.186339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:36.700 [2024-12-06 06:49:55.186626] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:36.700 [2024-12-06 06:49:55.186661] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid 
bdev raid_bdev1. 00:23:36.700 [2024-12-06 06:49:55.186720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:36.700 [2024-12-06 06:49:55.199348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:23:36.700 06:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.700 06:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:36.700 [2024-12-06 06:49:55.201817] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:37.634 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:37.634 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:37.634 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:37.634 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:37.634 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:37.634 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.634 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.634 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.634 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:37.634 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.634 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:37.634 "name": 
"raid_bdev1", 00:23:37.634 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:37.634 "strip_size_kb": 0, 00:23:37.634 "state": "online", 00:23:37.634 "raid_level": "raid1", 00:23:37.634 "superblock": true, 00:23:37.634 "num_base_bdevs": 2, 00:23:37.634 "num_base_bdevs_discovered": 2, 00:23:37.634 "num_base_bdevs_operational": 2, 00:23:37.634 "process": { 00:23:37.634 "type": "rebuild", 00:23:37.634 "target": "spare", 00:23:37.634 "progress": { 00:23:37.634 "blocks": 2560, 00:23:37.634 "percent": 32 00:23:37.634 } 00:23:37.634 }, 00:23:37.634 "base_bdevs_list": [ 00:23:37.634 { 00:23:37.634 "name": "spare", 00:23:37.634 "uuid": "138c3ec6-f0cf-5807-99d6-8ac66a5539cc", 00:23:37.634 "is_configured": true, 00:23:37.634 "data_offset": 256, 00:23:37.634 "data_size": 7936 00:23:37.634 }, 00:23:37.634 { 00:23:37.634 "name": "BaseBdev2", 00:23:37.634 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:37.634 "is_configured": true, 00:23:37.634 "data_offset": 256, 00:23:37.634 "data_size": 7936 00:23:37.634 } 00:23:37.634 ] 00:23:37.634 }' 00:23:37.634 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:37.892 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:37.892 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:37.892 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:37.892 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:37.892 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:37.893 [2024-12-06 06:49:56.363477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:23:37.893 [2024-12-06 06:49:56.410754] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:37.893 [2024-12-06 06:49:56.410838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:37.893 [2024-12-06 06:49:56.410861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:37.893 [2024-12-06 06:49:56.410888] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:37.893 "name": "raid_bdev1", 00:23:37.893 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:37.893 "strip_size_kb": 0, 00:23:37.893 "state": "online", 00:23:37.893 "raid_level": "raid1", 00:23:37.893 "superblock": true, 00:23:37.893 "num_base_bdevs": 2, 00:23:37.893 "num_base_bdevs_discovered": 1, 00:23:37.893 "num_base_bdevs_operational": 1, 00:23:37.893 "base_bdevs_list": [ 00:23:37.893 { 00:23:37.893 "name": null, 00:23:37.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.893 "is_configured": false, 00:23:37.893 "data_offset": 0, 00:23:37.893 "data_size": 7936 00:23:37.893 }, 00:23:37.893 { 00:23:37.893 "name": "BaseBdev2", 00:23:37.893 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:37.893 "is_configured": true, 00:23:37.893 "data_offset": 256, 00:23:37.893 "data_size": 7936 00:23:37.893 } 00:23:37.893 ] 00:23:37.893 }' 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:37.893 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:38.460 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:38.460 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.460 06:49:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:38.460 [2024-12-06 06:49:56.957228] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:38.460 [2024-12-06 06:49:56.957323] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:38.460 [2024-12-06 06:49:56.957360] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:38.460 [2024-12-06 06:49:56.957379] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:38.460 [2024-12-06 06:49:56.957715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:38.460 [2024-12-06 06:49:56.957756] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:38.460 [2024-12-06 06:49:56.957837] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:38.460 [2024-12-06 06:49:56.957861] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:38.460 [2024-12-06 06:49:56.957874] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:38.460 [2024-12-06 06:49:56.957906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:38.460 [2024-12-06 06:49:56.970632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:23:38.460 spare 00:23:38.460 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.461 06:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:38.461 [2024-12-06 06:49:56.973092] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:39.397 06:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:39.397 06:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:39.397 06:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:39.397 06:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:39.397 06:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:39.397 06:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.397 06:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.397 06:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.397 06:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:39.397 06:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.397 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:39.397 "name": 
"raid_bdev1", 00:23:39.397 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:39.397 "strip_size_kb": 0, 00:23:39.397 "state": "online", 00:23:39.397 "raid_level": "raid1", 00:23:39.397 "superblock": true, 00:23:39.397 "num_base_bdevs": 2, 00:23:39.397 "num_base_bdevs_discovered": 2, 00:23:39.397 "num_base_bdevs_operational": 2, 00:23:39.397 "process": { 00:23:39.397 "type": "rebuild", 00:23:39.397 "target": "spare", 00:23:39.397 "progress": { 00:23:39.397 "blocks": 2560, 00:23:39.397 "percent": 32 00:23:39.397 } 00:23:39.397 }, 00:23:39.397 "base_bdevs_list": [ 00:23:39.397 { 00:23:39.397 "name": "spare", 00:23:39.397 "uuid": "138c3ec6-f0cf-5807-99d6-8ac66a5539cc", 00:23:39.397 "is_configured": true, 00:23:39.397 "data_offset": 256, 00:23:39.397 "data_size": 7936 00:23:39.397 }, 00:23:39.397 { 00:23:39.397 "name": "BaseBdev2", 00:23:39.397 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:39.397 "is_configured": true, 00:23:39.397 "data_offset": 256, 00:23:39.397 "data_size": 7936 00:23:39.397 } 00:23:39.397 ] 00:23:39.397 }' 00:23:39.397 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:39.655 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:39.655 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:39.655 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:39.655 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:39.655 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.655 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:39.655 [2024-12-06 06:49:58.135439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:23:39.655 [2024-12-06 06:49:58.182112] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:39.655 [2024-12-06 06:49:58.182224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.655 [2024-12-06 06:49:58.182254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:39.655 [2024-12-06 06:49:58.182266] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:39.655 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.655 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:39.655 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:39.655 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:39.655 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:39.656 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:39.656 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:39.656 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.656 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.656 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.656 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.656 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:23:39.656 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.656 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:39.656 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.656 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.656 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:39.656 "name": "raid_bdev1", 00:23:39.656 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:39.656 "strip_size_kb": 0, 00:23:39.656 "state": "online", 00:23:39.656 "raid_level": "raid1", 00:23:39.656 "superblock": true, 00:23:39.656 "num_base_bdevs": 2, 00:23:39.656 "num_base_bdevs_discovered": 1, 00:23:39.656 "num_base_bdevs_operational": 1, 00:23:39.656 "base_bdevs_list": [ 00:23:39.656 { 00:23:39.656 "name": null, 00:23:39.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.656 "is_configured": false, 00:23:39.656 "data_offset": 0, 00:23:39.656 "data_size": 7936 00:23:39.656 }, 00:23:39.656 { 00:23:39.656 "name": "BaseBdev2", 00:23:39.656 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:39.656 "is_configured": true, 00:23:39.656 "data_offset": 256, 00:23:39.656 "data_size": 7936 00:23:39.656 } 00:23:39.656 ] 00:23:39.656 }' 00:23:39.656 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:39.656 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:40.222 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:40.222 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:40.222 06:49:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:40.222 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:40.222 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:40.222 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.222 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.222 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:40.222 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.223 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.223 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:40.223 "name": "raid_bdev1", 00:23:40.223 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:40.223 "strip_size_kb": 0, 00:23:40.223 "state": "online", 00:23:40.223 "raid_level": "raid1", 00:23:40.223 "superblock": true, 00:23:40.223 "num_base_bdevs": 2, 00:23:40.223 "num_base_bdevs_discovered": 1, 00:23:40.223 "num_base_bdevs_operational": 1, 00:23:40.223 "base_bdevs_list": [ 00:23:40.223 { 00:23:40.223 "name": null, 00:23:40.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.223 "is_configured": false, 00:23:40.223 "data_offset": 0, 00:23:40.223 "data_size": 7936 00:23:40.223 }, 00:23:40.223 { 00:23:40.223 "name": "BaseBdev2", 00:23:40.223 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:40.223 "is_configured": true, 00:23:40.223 "data_offset": 256, 00:23:40.223 "data_size": 7936 00:23:40.223 } 00:23:40.223 ] 00:23:40.223 }' 00:23:40.223 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:40.223 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:40.223 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:40.223 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:40.223 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:40.223 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.223 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:40.223 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.223 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:40.223 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.223 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:40.498 [2024-12-06 06:49:58.868685] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:40.498 [2024-12-06 06:49:58.868755] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:40.498 [2024-12-06 06:49:58.868789] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:40.498 [2024-12-06 06:49:58.868804] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:40.498 [2024-12-06 06:49:58.869081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:40.498 [2024-12-06 06:49:58.869113] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:23:40.498 [2024-12-06 06:49:58.869190] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:40.498 [2024-12-06 06:49:58.869209] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:40.498 [2024-12-06 06:49:58.869223] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:40.498 [2024-12-06 06:49:58.869237] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:40.498 BaseBdev1 00:23:40.498 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.498 06:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:41.449 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:41.449 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:41.449 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:41.449 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:41.449 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:41.449 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:41.449 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:41.449 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:41.449 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:23:41.449 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:41.449 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.449 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.449 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:41.449 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.449 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.449 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:41.449 "name": "raid_bdev1", 00:23:41.449 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:41.449 "strip_size_kb": 0, 00:23:41.449 "state": "online", 00:23:41.449 "raid_level": "raid1", 00:23:41.449 "superblock": true, 00:23:41.449 "num_base_bdevs": 2, 00:23:41.449 "num_base_bdevs_discovered": 1, 00:23:41.449 "num_base_bdevs_operational": 1, 00:23:41.449 "base_bdevs_list": [ 00:23:41.449 { 00:23:41.449 "name": null, 00:23:41.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.449 "is_configured": false, 00:23:41.449 "data_offset": 0, 00:23:41.449 "data_size": 7936 00:23:41.449 }, 00:23:41.449 { 00:23:41.449 "name": "BaseBdev2", 00:23:41.449 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:41.449 "is_configured": true, 00:23:41.449 "data_offset": 256, 00:23:41.449 "data_size": 7936 00:23:41.449 } 00:23:41.449 ] 00:23:41.449 }' 00:23:41.450 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:41.450 06:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:42.017 "name": "raid_bdev1", 00:23:42.017 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:42.017 "strip_size_kb": 0, 00:23:42.017 "state": "online", 00:23:42.017 "raid_level": "raid1", 00:23:42.017 "superblock": true, 00:23:42.017 "num_base_bdevs": 2, 00:23:42.017 "num_base_bdevs_discovered": 1, 00:23:42.017 "num_base_bdevs_operational": 1, 00:23:42.017 "base_bdevs_list": [ 00:23:42.017 { 00:23:42.017 "name": null, 00:23:42.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.017 "is_configured": false, 00:23:42.017 "data_offset": 0, 00:23:42.017 "data_size": 7936 00:23:42.017 }, 00:23:42.017 { 00:23:42.017 "name": "BaseBdev2", 00:23:42.017 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:42.017 "is_configured": 
true, 00:23:42.017 "data_offset": 256, 00:23:42.017 "data_size": 7936 00:23:42.017 } 00:23:42.017 ] 00:23:42.017 }' 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:42.017 [2024-12-06 06:50:00.561263] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:42.017 [2024-12-06 06:50:00.561495] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:42.017 [2024-12-06 06:50:00.561544] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:42.017 request: 00:23:42.017 { 00:23:42.017 "base_bdev": "BaseBdev1", 00:23:42.017 "raid_bdev": "raid_bdev1", 00:23:42.017 "method": "bdev_raid_add_base_bdev", 00:23:42.017 "req_id": 1 00:23:42.017 } 00:23:42.017 Got JSON-RPC error response 00:23:42.017 response: 00:23:42.017 { 00:23:42.017 "code": -22, 00:23:42.017 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:42.017 } 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:42.017 06:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:42.953 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:42.953 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:42.953 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:42.953 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:23:42.953 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:42.953 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:42.953 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:42.953 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:42.953 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:42.953 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:42.953 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.953 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.953 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.953 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:42.953 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.211 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:43.211 "name": "raid_bdev1", 00:23:43.211 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:43.212 "strip_size_kb": 0, 00:23:43.212 "state": "online", 00:23:43.212 "raid_level": "raid1", 00:23:43.212 "superblock": true, 00:23:43.212 "num_base_bdevs": 2, 00:23:43.212 "num_base_bdevs_discovered": 1, 00:23:43.212 "num_base_bdevs_operational": 1, 00:23:43.212 "base_bdevs_list": [ 00:23:43.212 { 00:23:43.212 "name": null, 00:23:43.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.212 "is_configured": false, 00:23:43.212 
"data_offset": 0, 00:23:43.212 "data_size": 7936 00:23:43.212 }, 00:23:43.212 { 00:23:43.212 "name": "BaseBdev2", 00:23:43.212 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:43.212 "is_configured": true, 00:23:43.212 "data_offset": 256, 00:23:43.212 "data_size": 7936 00:23:43.212 } 00:23:43.212 ] 00:23:43.212 }' 00:23:43.212 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:43.212 06:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:43.469 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:43.727 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:43.727 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:43.727 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:43.727 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:43.727 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:43.728 "name": "raid_bdev1", 00:23:43.728 "uuid": "464c975c-d277-4db3-a0ce-b13205b42e1f", 00:23:43.728 
"strip_size_kb": 0, 00:23:43.728 "state": "online", 00:23:43.728 "raid_level": "raid1", 00:23:43.728 "superblock": true, 00:23:43.728 "num_base_bdevs": 2, 00:23:43.728 "num_base_bdevs_discovered": 1, 00:23:43.728 "num_base_bdevs_operational": 1, 00:23:43.728 "base_bdevs_list": [ 00:23:43.728 { 00:23:43.728 "name": null, 00:23:43.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.728 "is_configured": false, 00:23:43.728 "data_offset": 0, 00:23:43.728 "data_size": 7936 00:23:43.728 }, 00:23:43.728 { 00:23:43.728 "name": "BaseBdev2", 00:23:43.728 "uuid": "9e831c17-abd6-57cd-b811-7927e847cc22", 00:23:43.728 "is_configured": true, 00:23:43.728 "data_offset": 256, 00:23:43.728 "data_size": 7936 00:23:43.728 } 00:23:43.728 ] 00:23:43.728 }' 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88398 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88398 ']' 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88398 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88398 00:23:43.728 06:50:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:43.728 killing process with pid 88398 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88398' 00:23:43.728 Received shutdown signal, test time was about 60.000000 seconds 00:23:43.728 00:23:43.728 Latency(us) 00:23:43.728 [2024-12-06T06:50:02.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.728 [2024-12-06T06:50:02.375Z] =================================================================================================================== 00:23:43.728 [2024-12-06T06:50:02.375Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88398 00:23:43.728 [2024-12-06 06:50:02.314421] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:43.728 06:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88398 00:23:43.728 [2024-12-06 06:50:02.314603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:43.728 [2024-12-06 06:50:02.314681] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:43.728 [2024-12-06 06:50:02.314702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:43.986 [2024-12-06 06:50:02.606719] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:45.360 06:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:23:45.360 00:23:45.360 real 0m21.886s 00:23:45.360 user 0m29.684s 00:23:45.360 sys 0m2.690s 00:23:45.360 06:50:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:45.360 06:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:45.360 ************************************ 00:23:45.360 END TEST raid_rebuild_test_sb_md_separate 00:23:45.360 ************************************ 00:23:45.360 06:50:03 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:23:45.360 06:50:03 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:23:45.360 06:50:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:45.360 06:50:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:45.360 06:50:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:45.360 ************************************ 00:23:45.360 START TEST raid_state_function_test_sb_md_interleaved 00:23:45.360 ************************************ 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:45.360 06:50:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89109 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:45.360 Process raid pid: 89109 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89109' 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89109 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89109 ']' 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.360 06:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.360 [2024-12-06 06:50:03.823837] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:23:45.360 [2024-12-06 06:50:03.824009] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.360 [2024-12-06 06:50:04.000401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.618 [2024-12-06 06:50:04.138196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.876 [2024-12-06 06:50:04.351475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:45.876 [2024-12-06 06:50:04.351520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.441 [2024-12-06 06:50:04.796697] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:46.441 [2024-12-06 06:50:04.796765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:46.441 [2024-12-06 06:50:04.796782] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:46.441 [2024-12-06 06:50:04.796798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:46.441 06:50:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.441 06:50:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:46.441 "name": "Existed_Raid", 00:23:46.441 "uuid": "ea1fec36-177f-42d1-9556-974393757bb2", 00:23:46.441 "strip_size_kb": 0, 00:23:46.441 "state": "configuring", 00:23:46.441 "raid_level": "raid1", 00:23:46.441 "superblock": true, 00:23:46.441 "num_base_bdevs": 2, 00:23:46.441 "num_base_bdevs_discovered": 0, 00:23:46.441 "num_base_bdevs_operational": 2, 00:23:46.441 "base_bdevs_list": [ 00:23:46.441 { 00:23:46.441 "name": "BaseBdev1", 00:23:46.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.441 "is_configured": false, 00:23:46.441 "data_offset": 0, 00:23:46.441 "data_size": 0 00:23:46.441 }, 00:23:46.441 { 00:23:46.441 "name": "BaseBdev2", 00:23:46.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.441 "is_configured": false, 00:23:46.441 "data_offset": 0, 00:23:46.441 "data_size": 0 00:23:46.441 } 00:23:46.441 ] 00:23:46.441 }' 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:46.441 06:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.699 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:46.699 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.699 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.699 [2024-12-06 06:50:05.316796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:46.699 [2024-12-06 06:50:05.316843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:23:46.699 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.699 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:46.699 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.699 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.699 [2024-12-06 06:50:05.324781] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:46.699 [2024-12-06 06:50:05.324836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:46.699 [2024-12-06 06:50:05.324851] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:46.699 [2024-12-06 06:50:05.324873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:46.699 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.699 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:23:46.699 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.699 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.956 [2024-12-06 06:50:05.369542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:46.956 BaseBdev1 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.956 [ 00:23:46.956 { 00:23:46.956 "name": "BaseBdev1", 00:23:46.956 "aliases": [ 00:23:46.956 "2664936c-c554-40b0-a508-7daf29e0db9f" 00:23:46.956 ], 00:23:46.956 "product_name": "Malloc disk", 00:23:46.956 "block_size": 4128, 00:23:46.956 "num_blocks": 8192, 00:23:46.956 "uuid": "2664936c-c554-40b0-a508-7daf29e0db9f", 00:23:46.956 "md_size": 32, 00:23:46.956 
"md_interleave": true, 00:23:46.956 "dif_type": 0, 00:23:46.956 "assigned_rate_limits": { 00:23:46.956 "rw_ios_per_sec": 0, 00:23:46.956 "rw_mbytes_per_sec": 0, 00:23:46.956 "r_mbytes_per_sec": 0, 00:23:46.956 "w_mbytes_per_sec": 0 00:23:46.956 }, 00:23:46.956 "claimed": true, 00:23:46.956 "claim_type": "exclusive_write", 00:23:46.956 "zoned": false, 00:23:46.956 "supported_io_types": { 00:23:46.956 "read": true, 00:23:46.956 "write": true, 00:23:46.956 "unmap": true, 00:23:46.956 "flush": true, 00:23:46.956 "reset": true, 00:23:46.956 "nvme_admin": false, 00:23:46.956 "nvme_io": false, 00:23:46.956 "nvme_io_md": false, 00:23:46.956 "write_zeroes": true, 00:23:46.956 "zcopy": true, 00:23:46.956 "get_zone_info": false, 00:23:46.956 "zone_management": false, 00:23:46.956 "zone_append": false, 00:23:46.956 "compare": false, 00:23:46.956 "compare_and_write": false, 00:23:46.956 "abort": true, 00:23:46.956 "seek_hole": false, 00:23:46.956 "seek_data": false, 00:23:46.956 "copy": true, 00:23:46.956 "nvme_iov_md": false 00:23:46.956 }, 00:23:46.956 "memory_domains": [ 00:23:46.956 { 00:23:46.956 "dma_device_id": "system", 00:23:46.956 "dma_device_type": 1 00:23:46.956 }, 00:23:46.956 { 00:23:46.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.956 "dma_device_type": 2 00:23:46.956 } 00:23:46.956 ], 00:23:46.956 "driver_specific": {} 00:23:46.956 } 00:23:46.956 ] 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:46.956 06:50:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.956 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:46.956 "name": "Existed_Raid", 00:23:46.956 "uuid": "7a462038-8b6e-48e6-b24b-0753204d19be", 00:23:46.956 "strip_size_kb": 0, 00:23:46.956 "state": "configuring", 00:23:46.956 "raid_level": "raid1", 
00:23:46.956 "superblock": true, 00:23:46.956 "num_base_bdevs": 2, 00:23:46.956 "num_base_bdevs_discovered": 1, 00:23:46.956 "num_base_bdevs_operational": 2, 00:23:46.956 "base_bdevs_list": [ 00:23:46.956 { 00:23:46.956 "name": "BaseBdev1", 00:23:46.956 "uuid": "2664936c-c554-40b0-a508-7daf29e0db9f", 00:23:46.956 "is_configured": true, 00:23:46.956 "data_offset": 256, 00:23:46.957 "data_size": 7936 00:23:46.957 }, 00:23:46.957 { 00:23:46.957 "name": "BaseBdev2", 00:23:46.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.957 "is_configured": false, 00:23:46.957 "data_offset": 0, 00:23:46.957 "data_size": 0 00:23:46.957 } 00:23:46.957 ] 00:23:46.957 }' 00:23:46.957 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:46.957 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.520 [2024-12-06 06:50:05.917772] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:47.520 [2024-12-06 06:50:05.917838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.520 [2024-12-06 06:50:05.925808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:47.520 [2024-12-06 06:50:05.928202] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:47.520 [2024-12-06 06:50:05.928258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:47.520 
06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:47.520 "name": "Existed_Raid", 00:23:47.520 "uuid": "46e96a76-006e-4471-b34b-6df966c489c0", 00:23:47.520 "strip_size_kb": 0, 00:23:47.520 "state": "configuring", 00:23:47.520 "raid_level": "raid1", 00:23:47.520 "superblock": true, 00:23:47.520 "num_base_bdevs": 2, 00:23:47.520 "num_base_bdevs_discovered": 1, 00:23:47.520 "num_base_bdevs_operational": 2, 00:23:47.520 "base_bdevs_list": [ 00:23:47.520 { 00:23:47.520 "name": "BaseBdev1", 00:23:47.520 "uuid": "2664936c-c554-40b0-a508-7daf29e0db9f", 00:23:47.520 "is_configured": true, 00:23:47.520 "data_offset": 256, 00:23:47.520 "data_size": 7936 00:23:47.520 }, 00:23:47.520 { 00:23:47.520 "name": "BaseBdev2", 00:23:47.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.520 "is_configured": false, 00:23:47.520 "data_offset": 0, 00:23:47.520 "data_size": 0 00:23:47.520 } 00:23:47.520 ] 00:23:47.520 }' 00:23:47.520 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:23:47.521 06:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.085 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:23:48.085 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.085 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.085 [2024-12-06 06:50:06.476063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:48.085 [2024-12-06 06:50:06.476388] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:48.085 [2024-12-06 06:50:06.476407] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:48.085 [2024-12-06 06:50:06.476520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:48.085 [2024-12-06 06:50:06.476647] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:48.085 [2024-12-06 06:50:06.476665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:48.085 [2024-12-06 06:50:06.476748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:48.085 BaseBdev2 00:23:48.085 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.085 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:48.085 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:48.085 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:23:48.085 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:23:48.085 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:48.085 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:48.085 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:48.085 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.085 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.085 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.085 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:48.085 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.086 [ 00:23:48.086 { 00:23:48.086 "name": "BaseBdev2", 00:23:48.086 "aliases": [ 00:23:48.086 "e7c0977f-43d1-4a24-8750-2662e1faf5de" 00:23:48.086 ], 00:23:48.086 "product_name": "Malloc disk", 00:23:48.086 "block_size": 4128, 00:23:48.086 "num_blocks": 8192, 00:23:48.086 "uuid": "e7c0977f-43d1-4a24-8750-2662e1faf5de", 00:23:48.086 "md_size": 32, 00:23:48.086 "md_interleave": true, 00:23:48.086 "dif_type": 0, 00:23:48.086 "assigned_rate_limits": { 00:23:48.086 "rw_ios_per_sec": 0, 00:23:48.086 "rw_mbytes_per_sec": 0, 00:23:48.086 "r_mbytes_per_sec": 0, 00:23:48.086 "w_mbytes_per_sec": 0 00:23:48.086 }, 00:23:48.086 "claimed": true, 00:23:48.086 "claim_type": "exclusive_write", 
00:23:48.086 "zoned": false, 00:23:48.086 "supported_io_types": { 00:23:48.086 "read": true, 00:23:48.086 "write": true, 00:23:48.086 "unmap": true, 00:23:48.086 "flush": true, 00:23:48.086 "reset": true, 00:23:48.086 "nvme_admin": false, 00:23:48.086 "nvme_io": false, 00:23:48.086 "nvme_io_md": false, 00:23:48.086 "write_zeroes": true, 00:23:48.086 "zcopy": true, 00:23:48.086 "get_zone_info": false, 00:23:48.086 "zone_management": false, 00:23:48.086 "zone_append": false, 00:23:48.086 "compare": false, 00:23:48.086 "compare_and_write": false, 00:23:48.086 "abort": true, 00:23:48.086 "seek_hole": false, 00:23:48.086 "seek_data": false, 00:23:48.086 "copy": true, 00:23:48.086 "nvme_iov_md": false 00:23:48.086 }, 00:23:48.086 "memory_domains": [ 00:23:48.086 { 00:23:48.086 "dma_device_id": "system", 00:23:48.086 "dma_device_type": 1 00:23:48.086 }, 00:23:48.086 { 00:23:48.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:48.086 "dma_device_type": 2 00:23:48.086 } 00:23:48.086 ], 00:23:48.086 "driver_specific": {} 00:23:48.086 } 00:23:48.086 ] 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:48.086 
06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:48.086 "name": "Existed_Raid", 00:23:48.086 "uuid": "46e96a76-006e-4471-b34b-6df966c489c0", 00:23:48.086 "strip_size_kb": 0, 00:23:48.086 "state": "online", 00:23:48.086 "raid_level": "raid1", 00:23:48.086 "superblock": true, 00:23:48.086 "num_base_bdevs": 2, 00:23:48.086 "num_base_bdevs_discovered": 2, 00:23:48.086 
"num_base_bdevs_operational": 2, 00:23:48.086 "base_bdevs_list": [ 00:23:48.086 { 00:23:48.086 "name": "BaseBdev1", 00:23:48.086 "uuid": "2664936c-c554-40b0-a508-7daf29e0db9f", 00:23:48.086 "is_configured": true, 00:23:48.086 "data_offset": 256, 00:23:48.086 "data_size": 7936 00:23:48.086 }, 00:23:48.086 { 00:23:48.086 "name": "BaseBdev2", 00:23:48.086 "uuid": "e7c0977f-43d1-4a24-8750-2662e1faf5de", 00:23:48.086 "is_configured": true, 00:23:48.086 "data_offset": 256, 00:23:48.086 "data_size": 7936 00:23:48.086 } 00:23:48.086 ] 00:23:48.086 }' 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:48.086 06:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.652 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:48.652 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:48.652 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:48.652 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:48.652 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:23:48.652 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:48.652 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:48.652 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.652 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:48.652 06:50:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.652 [2024-12-06 06:50:07.008667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:48.652 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.652 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:48.652 "name": "Existed_Raid", 00:23:48.652 "aliases": [ 00:23:48.652 "46e96a76-006e-4471-b34b-6df966c489c0" 00:23:48.652 ], 00:23:48.652 "product_name": "Raid Volume", 00:23:48.652 "block_size": 4128, 00:23:48.652 "num_blocks": 7936, 00:23:48.652 "uuid": "46e96a76-006e-4471-b34b-6df966c489c0", 00:23:48.652 "md_size": 32, 00:23:48.652 "md_interleave": true, 00:23:48.652 "dif_type": 0, 00:23:48.652 "assigned_rate_limits": { 00:23:48.652 "rw_ios_per_sec": 0, 00:23:48.652 "rw_mbytes_per_sec": 0, 00:23:48.652 "r_mbytes_per_sec": 0, 00:23:48.652 "w_mbytes_per_sec": 0 00:23:48.652 }, 00:23:48.652 "claimed": false, 00:23:48.652 "zoned": false, 00:23:48.652 "supported_io_types": { 00:23:48.652 "read": true, 00:23:48.652 "write": true, 00:23:48.652 "unmap": false, 00:23:48.652 "flush": false, 00:23:48.652 "reset": true, 00:23:48.652 "nvme_admin": false, 00:23:48.652 "nvme_io": false, 00:23:48.652 "nvme_io_md": false, 00:23:48.652 "write_zeroes": true, 00:23:48.652 "zcopy": false, 00:23:48.652 "get_zone_info": false, 00:23:48.652 "zone_management": false, 00:23:48.652 "zone_append": false, 00:23:48.652 "compare": false, 00:23:48.652 "compare_and_write": false, 00:23:48.652 "abort": false, 00:23:48.652 "seek_hole": false, 00:23:48.652 "seek_data": false, 00:23:48.652 "copy": false, 00:23:48.652 "nvme_iov_md": false 00:23:48.652 }, 00:23:48.652 "memory_domains": [ 00:23:48.652 { 00:23:48.652 "dma_device_id": "system", 00:23:48.652 "dma_device_type": 1 00:23:48.652 }, 00:23:48.652 { 00:23:48.652 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:48.652 "dma_device_type": 2 00:23:48.652 }, 00:23:48.652 { 00:23:48.652 "dma_device_id": "system", 00:23:48.652 "dma_device_type": 1 00:23:48.652 }, 00:23:48.652 { 00:23:48.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:48.652 "dma_device_type": 2 00:23:48.652 } 00:23:48.652 ], 00:23:48.652 "driver_specific": { 00:23:48.652 "raid": { 00:23:48.652 "uuid": "46e96a76-006e-4471-b34b-6df966c489c0", 00:23:48.652 "strip_size_kb": 0, 00:23:48.652 "state": "online", 00:23:48.652 "raid_level": "raid1", 00:23:48.652 "superblock": true, 00:23:48.652 "num_base_bdevs": 2, 00:23:48.652 "num_base_bdevs_discovered": 2, 00:23:48.652 "num_base_bdevs_operational": 2, 00:23:48.652 "base_bdevs_list": [ 00:23:48.652 { 00:23:48.652 "name": "BaseBdev1", 00:23:48.652 "uuid": "2664936c-c554-40b0-a508-7daf29e0db9f", 00:23:48.652 "is_configured": true, 00:23:48.652 "data_offset": 256, 00:23:48.652 "data_size": 7936 00:23:48.652 }, 00:23:48.652 { 00:23:48.652 "name": "BaseBdev2", 00:23:48.652 "uuid": "e7c0977f-43d1-4a24-8750-2662e1faf5de", 00:23:48.652 "is_configured": true, 00:23:48.652 "data_offset": 256, 00:23:48.652 "data_size": 7936 00:23:48.652 } 00:23:48.652 ] 00:23:48.652 } 00:23:48.652 } 00:23:48.652 }' 00:23:48.652 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:48.653 BaseBdev2' 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:48.653 
06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.653 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.653 [2024-12-06 06:50:07.276450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:48.912 06:50:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:48.912 "name": "Existed_Raid", 00:23:48.912 "uuid": "46e96a76-006e-4471-b34b-6df966c489c0", 00:23:48.912 "strip_size_kb": 0, 00:23:48.912 "state": "online", 00:23:48.912 "raid_level": "raid1", 00:23:48.912 "superblock": true, 00:23:48.912 "num_base_bdevs": 2, 00:23:48.912 "num_base_bdevs_discovered": 1, 00:23:48.912 "num_base_bdevs_operational": 1, 00:23:48.912 "base_bdevs_list": [ 00:23:48.912 { 00:23:48.912 "name": null, 00:23:48.912 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:48.912 "is_configured": false, 00:23:48.912 "data_offset": 0, 00:23:48.912 "data_size": 7936 00:23:48.912 }, 00:23:48.912 { 00:23:48.912 "name": "BaseBdev2", 00:23:48.912 "uuid": "e7c0977f-43d1-4a24-8750-2662e1faf5de", 00:23:48.912 "is_configured": true, 00:23:48.912 "data_offset": 256, 00:23:48.912 "data_size": 7936 00:23:48.912 } 00:23:48.912 ] 00:23:48.912 }' 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:48.912 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.479 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:49.479 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:49.479 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.479 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:49.479 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.479 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.479 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.479 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:49.479 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:49.479 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:49.479 06:50:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.479 06:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.479 [2024-12-06 06:50:07.921855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:49.479 [2024-12-06 06:50:07.921995] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:49.479 [2024-12-06 06:50:08.007441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:49.479 [2024-12-06 06:50:08.007505] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:49.479 [2024-12-06 06:50:08.007544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89109 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89109 ']' 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89109 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89109 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:49.479 killing process with pid 89109 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89109' 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89109 00:23:49.479 [2024-12-06 06:50:08.092264] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:49.479 06:50:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89109 00:23:49.479 [2024-12-06 06:50:08.106849] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:50.854 
06:50:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:23:50.854 00:23:50.854 real 0m5.449s 00:23:50.854 user 0m8.222s 00:23:50.854 sys 0m0.794s 00:23:50.854 06:50:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:50.854 06:50:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:50.854 ************************************ 00:23:50.854 END TEST raid_state_function_test_sb_md_interleaved 00:23:50.854 ************************************ 00:23:50.854 06:50:09 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:23:50.854 06:50:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:50.854 06:50:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:50.854 06:50:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:50.854 ************************************ 00:23:50.854 START TEST raid_superblock_test_md_interleaved 00:23:50.854 ************************************ 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89361 00:23:50.854 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89361 00:23:50.855 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:50.855 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89361 ']' 00:23:50.855 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.855 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:50.855 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.855 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.855 06:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:50.855 [2024-12-06 06:50:09.322787] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:23:50.855 [2024-12-06 06:50:09.322974] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89361 ] 00:23:51.113 [2024-12-06 06:50:09.509935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.113 [2024-12-06 06:50:09.664184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.372 [2024-12-06 06:50:09.866771] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:51.372 [2024-12-06 06:50:09.866816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.939 malloc1 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.939 [2024-12-06 06:50:10.338380] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:51.939 [2024-12-06 06:50:10.338447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.939 [2024-12-06 06:50:10.338480] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:51.939 [2024-12-06 06:50:10.338496] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.939 
[2024-12-06 06:50:10.341010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.939 [2024-12-06 06:50:10.341053] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:51.939 pt1 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:51.939 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.940 malloc2 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.940 [2024-12-06 06:50:10.394030] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:51.940 [2024-12-06 06:50:10.394099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.940 [2024-12-06 06:50:10.394131] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:51.940 [2024-12-06 06:50:10.394146] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.940 [2024-12-06 06:50:10.396544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.940 [2024-12-06 06:50:10.396585] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:51.940 pt2 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.940 [2024-12-06 06:50:10.402053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:51.940 [2024-12-06 06:50:10.404412] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:51.940 [2024-12-06 06:50:10.404678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:51.940 [2024-12-06 06:50:10.404707] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:51.940 [2024-12-06 06:50:10.404806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:51.940 [2024-12-06 06:50:10.404907] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:51.940 [2024-12-06 06:50:10.404937] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:51.940 [2024-12-06 06:50:10.405032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:51.940 
06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:51.940 "name": "raid_bdev1", 00:23:51.940 "uuid": "44ba4191-75b3-463f-8cfa-c03d58b0b853", 00:23:51.940 "strip_size_kb": 0, 00:23:51.940 "state": "online", 00:23:51.940 "raid_level": "raid1", 00:23:51.940 "superblock": true, 00:23:51.940 "num_base_bdevs": 2, 00:23:51.940 "num_base_bdevs_discovered": 2, 00:23:51.940 "num_base_bdevs_operational": 2, 00:23:51.940 "base_bdevs_list": [ 00:23:51.940 { 00:23:51.940 "name": "pt1", 00:23:51.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:51.940 "is_configured": true, 00:23:51.940 "data_offset": 256, 00:23:51.940 "data_size": 7936 00:23:51.940 }, 00:23:51.940 { 00:23:51.940 "name": "pt2", 00:23:51.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:51.940 "is_configured": true, 00:23:51.940 "data_offset": 256, 00:23:51.940 "data_size": 7936 00:23:51.940 } 00:23:51.940 ] 00:23:51.940 }' 00:23:51.940 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:51.940 06:50:10 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.507 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:52.507 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:52.507 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:52.507 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:52.507 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:23:52.507 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:52.507 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:52.507 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:52.507 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.507 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.507 [2024-12-06 06:50:10.942574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:52.507 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.507 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:52.507 "name": "raid_bdev1", 00:23:52.507 "aliases": [ 00:23:52.507 "44ba4191-75b3-463f-8cfa-c03d58b0b853" 00:23:52.507 ], 00:23:52.507 "product_name": "Raid Volume", 00:23:52.507 "block_size": 4128, 00:23:52.507 "num_blocks": 7936, 00:23:52.507 "uuid": "44ba4191-75b3-463f-8cfa-c03d58b0b853", 00:23:52.507 "md_size": 32, 
00:23:52.507 "md_interleave": true, 00:23:52.507 "dif_type": 0, 00:23:52.507 "assigned_rate_limits": { 00:23:52.507 "rw_ios_per_sec": 0, 00:23:52.507 "rw_mbytes_per_sec": 0, 00:23:52.507 "r_mbytes_per_sec": 0, 00:23:52.507 "w_mbytes_per_sec": 0 00:23:52.507 }, 00:23:52.507 "claimed": false, 00:23:52.507 "zoned": false, 00:23:52.507 "supported_io_types": { 00:23:52.507 "read": true, 00:23:52.507 "write": true, 00:23:52.507 "unmap": false, 00:23:52.507 "flush": false, 00:23:52.507 "reset": true, 00:23:52.507 "nvme_admin": false, 00:23:52.507 "nvme_io": false, 00:23:52.507 "nvme_io_md": false, 00:23:52.507 "write_zeroes": true, 00:23:52.507 "zcopy": false, 00:23:52.507 "get_zone_info": false, 00:23:52.507 "zone_management": false, 00:23:52.507 "zone_append": false, 00:23:52.507 "compare": false, 00:23:52.507 "compare_and_write": false, 00:23:52.507 "abort": false, 00:23:52.507 "seek_hole": false, 00:23:52.507 "seek_data": false, 00:23:52.507 "copy": false, 00:23:52.507 "nvme_iov_md": false 00:23:52.507 }, 00:23:52.507 "memory_domains": [ 00:23:52.507 { 00:23:52.507 "dma_device_id": "system", 00:23:52.507 "dma_device_type": 1 00:23:52.507 }, 00:23:52.507 { 00:23:52.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.507 "dma_device_type": 2 00:23:52.507 }, 00:23:52.507 { 00:23:52.507 "dma_device_id": "system", 00:23:52.507 "dma_device_type": 1 00:23:52.507 }, 00:23:52.507 { 00:23:52.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.507 "dma_device_type": 2 00:23:52.507 } 00:23:52.507 ], 00:23:52.507 "driver_specific": { 00:23:52.507 "raid": { 00:23:52.507 "uuid": "44ba4191-75b3-463f-8cfa-c03d58b0b853", 00:23:52.507 "strip_size_kb": 0, 00:23:52.507 "state": "online", 00:23:52.507 "raid_level": "raid1", 00:23:52.507 "superblock": true, 00:23:52.507 "num_base_bdevs": 2, 00:23:52.507 "num_base_bdevs_discovered": 2, 00:23:52.507 "num_base_bdevs_operational": 2, 00:23:52.507 "base_bdevs_list": [ 00:23:52.507 { 00:23:52.507 "name": "pt1", 00:23:52.507 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:23:52.507 "is_configured": true, 00:23:52.507 "data_offset": 256, 00:23:52.508 "data_size": 7936 00:23:52.508 }, 00:23:52.508 { 00:23:52.508 "name": "pt2", 00:23:52.508 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:52.508 "is_configured": true, 00:23:52.508 "data_offset": 256, 00:23:52.508 "data_size": 7936 00:23:52.508 } 00:23:52.508 ] 00:23:52.508 } 00:23:52.508 } 00:23:52.508 }' 00:23:52.508 06:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:52.508 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:52.508 pt2' 00:23:52.508 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:52.508 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:23:52.508 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:52.508 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:52.508 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:52.508 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.508 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.508 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.508 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:52.508 06:50:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:52.508 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:52.508 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:52.508 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.508 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.508 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.767 [2024-12-06 06:50:11.206577] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=44ba4191-75b3-463f-8cfa-c03d58b0b853 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 44ba4191-75b3-463f-8cfa-c03d58b0b853 ']' 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.767 [2024-12-06 06:50:11.266220] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:52.767 [2024-12-06 06:50:11.266252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:52.767 [2024-12-06 06:50:11.266349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:52.767 [2024-12-06 06:50:11.266425] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:52.767 [2024-12-06 06:50:11.266444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.767 06:50:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:52.767 06:50:11 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.767 [2024-12-06 06:50:11.402299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:52.767 [2024-12-06 06:50:11.404834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:52.767 [2024-12-06 06:50:11.404940] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:23:52.767 [2024-12-06 06:50:11.405020] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:52.767 [2024-12-06 06:50:11.405047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:52.767 [2024-12-06 06:50:11.405063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:23:52.767 request: 00:23:52.767 { 00:23:52.767 "name": "raid_bdev1", 00:23:52.767 "raid_level": "raid1", 00:23:52.767 "base_bdevs": [ 00:23:52.767 "malloc1", 00:23:52.767 "malloc2" 00:23:52.767 ], 00:23:52.767 "superblock": false, 00:23:52.767 "method": "bdev_raid_create", 00:23:52.767 "req_id": 1 00:23:52.767 } 00:23:52.767 Got JSON-RPC error response 00:23:52.767 response: 00:23:52.767 { 00:23:52.767 "code": -17, 00:23:52.767 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:52.767 } 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:52.767 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:53.026 06:50:11 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.026 [2024-12-06 06:50:11.470306] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:53.026 [2024-12-06 06:50:11.470384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:53.026 [2024-12-06 06:50:11.470410] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:53.026 [2024-12-06 06:50:11.470427] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:53.026 [2024-12-06 06:50:11.472954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:53.026 [2024-12-06 06:50:11.473002] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:53.026 [2024-12-06 06:50:11.473077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:53.026 [2024-12-06 06:50:11.473153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:53.026 pt1 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.026 06:50:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.026 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.027 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.027 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:53.027 
"name": "raid_bdev1", 00:23:53.027 "uuid": "44ba4191-75b3-463f-8cfa-c03d58b0b853", 00:23:53.027 "strip_size_kb": 0, 00:23:53.027 "state": "configuring", 00:23:53.027 "raid_level": "raid1", 00:23:53.027 "superblock": true, 00:23:53.027 "num_base_bdevs": 2, 00:23:53.027 "num_base_bdevs_discovered": 1, 00:23:53.027 "num_base_bdevs_operational": 2, 00:23:53.027 "base_bdevs_list": [ 00:23:53.027 { 00:23:53.027 "name": "pt1", 00:23:53.027 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:53.027 "is_configured": true, 00:23:53.027 "data_offset": 256, 00:23:53.027 "data_size": 7936 00:23:53.027 }, 00:23:53.027 { 00:23:53.027 "name": null, 00:23:53.027 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:53.027 "is_configured": false, 00:23:53.027 "data_offset": 256, 00:23:53.027 "data_size": 7936 00:23:53.027 } 00:23:53.027 ] 00:23:53.027 }' 00:23:53.027 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:53.027 06:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.594 [2024-12-06 06:50:12.018434] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:53.594 [2024-12-06 06:50:12.018546] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:53.594 [2024-12-06 06:50:12.018585] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:53.594 [2024-12-06 06:50:12.018604] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:53.594 [2024-12-06 06:50:12.018827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:53.594 [2024-12-06 06:50:12.018858] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:53.594 [2024-12-06 06:50:12.018928] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:53.594 [2024-12-06 06:50:12.018965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:53.594 [2024-12-06 06:50:12.019101] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:53.594 [2024-12-06 06:50:12.019122] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:53.594 [2024-12-06 06:50:12.019220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:53.594 [2024-12-06 06:50:12.019311] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:53.594 [2024-12-06 06:50:12.019331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:53.594 [2024-12-06 06:50:12.019419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:53.594 pt2 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:53.594 06:50:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.594 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:53.594 "name": 
"raid_bdev1", 00:23:53.594 "uuid": "44ba4191-75b3-463f-8cfa-c03d58b0b853", 00:23:53.594 "strip_size_kb": 0, 00:23:53.594 "state": "online", 00:23:53.594 "raid_level": "raid1", 00:23:53.594 "superblock": true, 00:23:53.594 "num_base_bdevs": 2, 00:23:53.594 "num_base_bdevs_discovered": 2, 00:23:53.594 "num_base_bdevs_operational": 2, 00:23:53.594 "base_bdevs_list": [ 00:23:53.594 { 00:23:53.594 "name": "pt1", 00:23:53.594 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:53.594 "is_configured": true, 00:23:53.594 "data_offset": 256, 00:23:53.594 "data_size": 7936 00:23:53.594 }, 00:23:53.594 { 00:23:53.594 "name": "pt2", 00:23:53.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:53.595 "is_configured": true, 00:23:53.595 "data_offset": 256, 00:23:53.595 "data_size": 7936 00:23:53.595 } 00:23:53.595 ] 00:23:53.595 }' 00:23:53.595 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:53.595 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.162 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:54.162 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:54.162 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:54.162 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:54.162 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:23:54.162 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:54.162 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:54.162 06:50:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:54.162 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.162 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.162 [2024-12-06 06:50:12.570943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:54.162 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.162 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:54.162 "name": "raid_bdev1", 00:23:54.162 "aliases": [ 00:23:54.162 "44ba4191-75b3-463f-8cfa-c03d58b0b853" 00:23:54.162 ], 00:23:54.162 "product_name": "Raid Volume", 00:23:54.162 "block_size": 4128, 00:23:54.162 "num_blocks": 7936, 00:23:54.162 "uuid": "44ba4191-75b3-463f-8cfa-c03d58b0b853", 00:23:54.162 "md_size": 32, 00:23:54.162 "md_interleave": true, 00:23:54.163 "dif_type": 0, 00:23:54.163 "assigned_rate_limits": { 00:23:54.163 "rw_ios_per_sec": 0, 00:23:54.163 "rw_mbytes_per_sec": 0, 00:23:54.163 "r_mbytes_per_sec": 0, 00:23:54.163 "w_mbytes_per_sec": 0 00:23:54.163 }, 00:23:54.163 "claimed": false, 00:23:54.163 "zoned": false, 00:23:54.163 "supported_io_types": { 00:23:54.163 "read": true, 00:23:54.163 "write": true, 00:23:54.163 "unmap": false, 00:23:54.163 "flush": false, 00:23:54.163 "reset": true, 00:23:54.163 "nvme_admin": false, 00:23:54.163 "nvme_io": false, 00:23:54.163 "nvme_io_md": false, 00:23:54.163 "write_zeroes": true, 00:23:54.163 "zcopy": false, 00:23:54.163 "get_zone_info": false, 00:23:54.163 "zone_management": false, 00:23:54.163 "zone_append": false, 00:23:54.163 "compare": false, 00:23:54.163 "compare_and_write": false, 00:23:54.163 "abort": false, 00:23:54.163 "seek_hole": false, 00:23:54.163 "seek_data": false, 00:23:54.163 "copy": false, 00:23:54.163 "nvme_iov_md": 
false 00:23:54.163 }, 00:23:54.163 "memory_domains": [ 00:23:54.163 { 00:23:54.163 "dma_device_id": "system", 00:23:54.163 "dma_device_type": 1 00:23:54.163 }, 00:23:54.163 { 00:23:54.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.163 "dma_device_type": 2 00:23:54.163 }, 00:23:54.163 { 00:23:54.163 "dma_device_id": "system", 00:23:54.163 "dma_device_type": 1 00:23:54.163 }, 00:23:54.163 { 00:23:54.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.163 "dma_device_type": 2 00:23:54.163 } 00:23:54.163 ], 00:23:54.163 "driver_specific": { 00:23:54.163 "raid": { 00:23:54.163 "uuid": "44ba4191-75b3-463f-8cfa-c03d58b0b853", 00:23:54.163 "strip_size_kb": 0, 00:23:54.163 "state": "online", 00:23:54.163 "raid_level": "raid1", 00:23:54.163 "superblock": true, 00:23:54.163 "num_base_bdevs": 2, 00:23:54.163 "num_base_bdevs_discovered": 2, 00:23:54.163 "num_base_bdevs_operational": 2, 00:23:54.163 "base_bdevs_list": [ 00:23:54.163 { 00:23:54.163 "name": "pt1", 00:23:54.163 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:54.163 "is_configured": true, 00:23:54.163 "data_offset": 256, 00:23:54.163 "data_size": 7936 00:23:54.163 }, 00:23:54.163 { 00:23:54.163 "name": "pt2", 00:23:54.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:54.163 "is_configured": true, 00:23:54.163 "data_offset": 256, 00:23:54.163 "data_size": 7936 00:23:54.163 } 00:23:54.163 ] 00:23:54.163 } 00:23:54.163 } 00:23:54.163 }' 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:54.163 pt2' 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.163 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.422 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:23:54.422 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:54.422 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:54.422 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.422 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.422 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:54.422 [2024-12-06 06:50:12.839048] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:54.422 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.422 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 44ba4191-75b3-463f-8cfa-c03d58b0b853 '!=' 44ba4191-75b3-463f-8cfa-c03d58b0b853 ']' 00:23:54.422 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:54.422 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:54.422 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:23:54.422 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:54.422 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.422 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.423 [2024-12-06 06:50:12.886760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:23:54.423 "name": "raid_bdev1", 00:23:54.423 "uuid": "44ba4191-75b3-463f-8cfa-c03d58b0b853", 00:23:54.423 "strip_size_kb": 0, 00:23:54.423 "state": "online", 00:23:54.423 "raid_level": "raid1", 00:23:54.423 "superblock": true, 00:23:54.423 "num_base_bdevs": 2, 00:23:54.423 "num_base_bdevs_discovered": 1, 00:23:54.423 "num_base_bdevs_operational": 1, 00:23:54.423 "base_bdevs_list": [ 00:23:54.423 { 00:23:54.423 "name": null, 00:23:54.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.423 "is_configured": false, 00:23:54.423 "data_offset": 0, 00:23:54.423 "data_size": 7936 00:23:54.423 }, 00:23:54.423 { 00:23:54.423 "name": "pt2", 00:23:54.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:54.423 "is_configured": true, 00:23:54.423 "data_offset": 256, 00:23:54.423 "data_size": 7936 00:23:54.423 } 00:23:54.423 ] 00:23:54.423 }' 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:54.423 06:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.990 [2024-12-06 06:50:13.394841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:54.990 [2024-12-06 06:50:13.394883] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:54.990 [2024-12-06 06:50:13.395041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:54.990 [2024-12-06 06:50:13.395109] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:23:54.990 [2024-12-06 06:50:13.395128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.990 [2024-12-06 06:50:13.466846] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:54.990 [2024-12-06 06:50:13.466965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:54.990 [2024-12-06 06:50:13.467018] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:54.990 [2024-12-06 06:50:13.467039] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:54.990 [2024-12-06 06:50:13.469804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:54.990 [2024-12-06 06:50:13.469856] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:54.990 [2024-12-06 06:50:13.469926] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:54.990 [2024-12-06 06:50:13.469991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:54.990 [2024-12-06 06:50:13.470091] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:54.990 [2024-12-06 06:50:13.470113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:23:54.990 [2024-12-06 06:50:13.470232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:54.990 [2024-12-06 06:50:13.470324] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:54.990 [2024-12-06 06:50:13.470338] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:54.990 [2024-12-06 06:50:13.470424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:54.990 pt2 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:54.990 06:50:13 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.990 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:54.990 "name": "raid_bdev1", 00:23:54.990 "uuid": "44ba4191-75b3-463f-8cfa-c03d58b0b853", 00:23:54.990 "strip_size_kb": 0, 00:23:54.990 "state": "online", 00:23:54.990 "raid_level": "raid1", 00:23:54.990 "superblock": true, 00:23:54.990 "num_base_bdevs": 2, 00:23:54.990 "num_base_bdevs_discovered": 1, 00:23:54.990 "num_base_bdevs_operational": 1, 00:23:54.990 "base_bdevs_list": [ 00:23:54.990 { 00:23:54.990 "name": null, 00:23:54.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.990 "is_configured": false, 00:23:54.990 "data_offset": 256, 00:23:54.990 "data_size": 7936 00:23:54.990 }, 00:23:54.990 { 00:23:54.990 "name": "pt2", 00:23:54.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:54.990 "is_configured": true, 00:23:54.990 "data_offset": 256, 00:23:54.990 "data_size": 7936 00:23:54.990 } 00:23:54.990 ] 00:23:54.990 }' 00:23:54.991 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:54.991 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:55.559 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:55.559 06:50:13 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.559 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:55.559 [2024-12-06 06:50:13.970967] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:55.559 [2024-12-06 06:50:13.971017] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:55.559 [2024-12-06 06:50:13.971113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:55.559 [2024-12-06 06:50:13.971186] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:55.559 [2024-12-06 06:50:13.971212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:55.559 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.559 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.559 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:55.559 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.559 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:55.559 06:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.559 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:55.559 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:23:55.559 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:23:55.559 06:50:14 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:55.559 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.559 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:55.559 [2024-12-06 06:50:14.055051] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:55.559 [2024-12-06 06:50:14.055126] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.559 [2024-12-06 06:50:14.055158] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:23:55.559 [2024-12-06 06:50:14.055173] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.559 [2024-12-06 06:50:14.057739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.559 [2024-12-06 06:50:14.057782] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:55.559 [2024-12-06 06:50:14.057862] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:55.559 [2024-12-06 06:50:14.057923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:55.559 [2024-12-06 06:50:14.058060] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:55.559 [2024-12-06 06:50:14.058078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:55.559 [2024-12-06 06:50:14.058106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:55.559 [2024-12-06 06:50:14.058179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:55.559 [2024-12-06 06:50:14.058298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:23:55.559 [2024-12-06 06:50:14.058314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:55.559 [2024-12-06 06:50:14.058400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:55.560 [2024-12-06 06:50:14.058486] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:55.560 [2024-12-06 06:50:14.058504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:55.560 [2024-12-06 06:50:14.058631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.560 pt1 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:55.560 06:50:14 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:55.560 "name": "raid_bdev1", 00:23:55.560 "uuid": "44ba4191-75b3-463f-8cfa-c03d58b0b853", 00:23:55.560 "strip_size_kb": 0, 00:23:55.560 "state": "online", 00:23:55.560 "raid_level": "raid1", 00:23:55.560 "superblock": true, 00:23:55.560 "num_base_bdevs": 2, 00:23:55.560 "num_base_bdevs_discovered": 1, 00:23:55.560 "num_base_bdevs_operational": 1, 00:23:55.560 "base_bdevs_list": [ 00:23:55.560 { 00:23:55.560 "name": null, 00:23:55.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.560 "is_configured": false, 00:23:55.560 "data_offset": 256, 00:23:55.560 "data_size": 7936 00:23:55.560 }, 00:23:55.560 { 00:23:55.560 "name": "pt2", 00:23:55.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:55.560 "is_configured": true, 00:23:55.560 "data_offset": 256, 00:23:55.560 "data_size": 7936 00:23:55.560 } 00:23:55.560 ] 00:23:55.560 }' 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:55.560 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:23:56.127 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:56.127 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:56.127 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.127 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:56.127 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.127 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:56.127 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:56.127 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:56.127 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.127 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:56.128 [2024-12-06 06:50:14.659473] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:56.128 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.128 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 44ba4191-75b3-463f-8cfa-c03d58b0b853 '!=' 44ba4191-75b3-463f-8cfa-c03d58b0b853 ']' 00:23:56.128 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89361 00:23:56.128 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89361 ']' 00:23:56.128 06:50:14 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89361 00:23:56.128 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:23:56.128 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.128 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89361 00:23:56.128 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:56.128 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:56.128 killing process with pid 89361 00:23:56.128 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89361' 00:23:56.128 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89361 00:23:56.128 [2024-12-06 06:50:14.728185] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:56.128 06:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89361 00:23:56.128 [2024-12-06 06:50:14.728312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:56.128 [2024-12-06 06:50:14.728382] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:56.128 [2024-12-06 06:50:14.728405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:56.386 [2024-12-06 06:50:14.912908] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:57.321 06:50:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:23:57.321 00:23:57.321 real 0m6.742s 00:23:57.321 user 0m10.731s 00:23:57.321 sys 0m0.961s 
00:23:57.321 06:50:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:57.321 06:50:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.321 ************************************ 00:23:57.321 END TEST raid_superblock_test_md_interleaved 00:23:57.321 ************************************ 00:23:57.578 06:50:16 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:23:57.578 06:50:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:57.578 06:50:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.578 06:50:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:57.578 ************************************ 00:23:57.578 START TEST raid_rebuild_test_sb_md_interleaved 00:23:57.578 ************************************ 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:57.578 06:50:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:57.578 
06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89690 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89690 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89690 ']' 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.578 06:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.578 [2024-12-06 06:50:16.159545] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:23:57.578 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:57.578 Zero copy mechanism will not be used. 
00:23:57.578 [2024-12-06 06:50:16.159729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89690 ] 00:23:57.836 [2024-12-06 06:50:16.350401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.094 [2024-12-06 06:50:16.483184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.094 [2024-12-06 06:50:16.688755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:58.094 [2024-12-06 06:50:16.688849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.662 BaseBdev1_malloc 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.662 06:50:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.662 [2024-12-06 06:50:17.164081] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:58.662 [2024-12-06 06:50:17.164152] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.662 [2024-12-06 06:50:17.164184] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:58.662 [2024-12-06 06:50:17.164202] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.662 [2024-12-06 06:50:17.166684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.662 [2024-12-06 06:50:17.166734] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:58.662 BaseBdev1 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.662 BaseBdev2_malloc 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:23:58.662 [2024-12-06 06:50:17.216950] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:58.662 [2024-12-06 06:50:17.217029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.662 [2024-12-06 06:50:17.217067] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:58.662 [2024-12-06 06:50:17.217084] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.662 [2024-12-06 06:50:17.219679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.662 [2024-12-06 06:50:17.219738] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:58.662 BaseBdev2 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.662 spare_malloc 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.662 spare_delay 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.662 [2024-12-06 06:50:17.291717] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:58.662 [2024-12-06 06:50:17.291792] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.662 [2024-12-06 06:50:17.291825] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:58.662 [2024-12-06 06:50:17.291843] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.662 [2024-12-06 06:50:17.294377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.662 [2024-12-06 06:50:17.294442] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:58.662 spare 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.662 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.662 [2024-12-06 06:50:17.299786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:58.662 [2024-12-06 06:50:17.302259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:58.662 [2024-12-06 
06:50:17.302548] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:58.663 [2024-12-06 06:50:17.302572] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:58.663 [2024-12-06 06:50:17.302670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:58.663 [2024-12-06 06:50:17.302781] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:58.663 [2024-12-06 06:50:17.302794] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:58.663 [2024-12-06 06:50:17.302888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.663 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.663 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:58.663 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:58.663 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:58.663 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:58.663 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:58.663 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:58.663 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:58.663 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:58.663 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:23:58.663 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:58.922 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.922 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.922 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.922 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.922 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.922 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:58.922 "name": "raid_bdev1", 00:23:58.922 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:23:58.922 "strip_size_kb": 0, 00:23:58.922 "state": "online", 00:23:58.922 "raid_level": "raid1", 00:23:58.922 "superblock": true, 00:23:58.922 "num_base_bdevs": 2, 00:23:58.922 "num_base_bdevs_discovered": 2, 00:23:58.922 "num_base_bdevs_operational": 2, 00:23:58.922 "base_bdevs_list": [ 00:23:58.922 { 00:23:58.922 "name": "BaseBdev1", 00:23:58.922 "uuid": "1c8204e3-8e72-5a8d-94af-96cbd0145fdc", 00:23:58.922 "is_configured": true, 00:23:58.922 "data_offset": 256, 00:23:58.922 "data_size": 7936 00:23:58.922 }, 00:23:58.922 { 00:23:58.922 "name": "BaseBdev2", 00:23:58.922 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:23:58.922 "is_configured": true, 00:23:58.922 "data_offset": 256, 00:23:58.922 "data_size": 7936 00:23:58.922 } 00:23:58.922 ] 00:23:58.922 }' 00:23:58.922 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:58.922 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.181 06:50:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:59.181 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:59.181 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.181 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.181 [2024-12-06 06:50:17.816750] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:59.440 06:50:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.440 [2024-12-06 06:50:17.912361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.440 06:50:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:59.440 "name": "raid_bdev1", 00:23:59.440 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:23:59.440 "strip_size_kb": 0, 00:23:59.440 "state": "online", 00:23:59.440 "raid_level": "raid1", 00:23:59.440 "superblock": true, 00:23:59.440 "num_base_bdevs": 2, 00:23:59.440 "num_base_bdevs_discovered": 1, 00:23:59.440 "num_base_bdevs_operational": 1, 00:23:59.440 "base_bdevs_list": [ 00:23:59.440 { 00:23:59.440 "name": null, 00:23:59.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.440 "is_configured": false, 00:23:59.440 "data_offset": 0, 00:23:59.440 "data_size": 7936 00:23:59.440 }, 00:23:59.440 { 00:23:59.440 "name": "BaseBdev2", 00:23:59.440 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:23:59.440 "is_configured": true, 00:23:59.440 "data_offset": 256, 00:23:59.440 "data_size": 7936 00:23:59.440 } 00:23:59.440 ] 00:23:59.440 }' 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:59.440 06:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:00.034 06:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:00.034 06:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.034 06:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:00.034 [2024-12-06 06:50:18.400609] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:00.034 [2024-12-06 06:50:18.418280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:00.034 06:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.034 06:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:00.034 [2024-12-06 06:50:18.421306] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:00.969 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:00.969 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:00.969 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:00.969 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:00.969 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:00.969 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.969 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.969 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:00.969 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.969 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.969 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:00.969 "name": "raid_bdev1", 00:24:00.969 
"uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:00.969 "strip_size_kb": 0, 00:24:00.969 "state": "online", 00:24:00.969 "raid_level": "raid1", 00:24:00.969 "superblock": true, 00:24:00.969 "num_base_bdevs": 2, 00:24:00.969 "num_base_bdevs_discovered": 2, 00:24:00.969 "num_base_bdevs_operational": 2, 00:24:00.969 "process": { 00:24:00.969 "type": "rebuild", 00:24:00.969 "target": "spare", 00:24:00.969 "progress": { 00:24:00.969 "blocks": 2560, 00:24:00.969 "percent": 32 00:24:00.969 } 00:24:00.969 }, 00:24:00.969 "base_bdevs_list": [ 00:24:00.969 { 00:24:00.969 "name": "spare", 00:24:00.969 "uuid": "d6365da6-e24b-524e-81c3-0d70069900db", 00:24:00.970 "is_configured": true, 00:24:00.970 "data_offset": 256, 00:24:00.970 "data_size": 7936 00:24:00.970 }, 00:24:00.970 { 00:24:00.970 "name": "BaseBdev2", 00:24:00.970 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:00.970 "is_configured": true, 00:24:00.970 "data_offset": 256, 00:24:00.970 "data_size": 7936 00:24:00.970 } 00:24:00.970 ] 00:24:00.970 }' 00:24:00.970 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:00.970 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:00.970 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:00.970 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:00.970 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:00.970 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.970 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:00.970 [2024-12-06 06:50:19.590784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:24:01.228 [2024-12-06 06:50:19.630812] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:01.228 [2024-12-06 06:50:19.630897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:01.228 [2024-12-06 06:50:19.630947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:01.228 [2024-12-06 06:50:19.630977] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:01.228 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.228 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:01.228 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:01.228 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:01.228 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:01.228 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:01.228 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:01.228 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:01.228 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:01.228 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:01.228 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:01.228 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:01.228 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.228 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.229 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:01.229 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.229 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:01.229 "name": "raid_bdev1", 00:24:01.229 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:01.229 "strip_size_kb": 0, 00:24:01.229 "state": "online", 00:24:01.229 "raid_level": "raid1", 00:24:01.229 "superblock": true, 00:24:01.229 "num_base_bdevs": 2, 00:24:01.229 "num_base_bdevs_discovered": 1, 00:24:01.229 "num_base_bdevs_operational": 1, 00:24:01.229 "base_bdevs_list": [ 00:24:01.229 { 00:24:01.229 "name": null, 00:24:01.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.229 "is_configured": false, 00:24:01.229 "data_offset": 0, 00:24:01.229 "data_size": 7936 00:24:01.229 }, 00:24:01.229 { 00:24:01.229 "name": "BaseBdev2", 00:24:01.229 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:01.229 "is_configured": true, 00:24:01.229 "data_offset": 256, 00:24:01.229 "data_size": 7936 00:24:01.229 } 00:24:01.229 ] 00:24:01.229 }' 00:24:01.229 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:01.229 06:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:01.796 "name": "raid_bdev1", 00:24:01.796 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:01.796 "strip_size_kb": 0, 00:24:01.796 "state": "online", 00:24:01.796 "raid_level": "raid1", 00:24:01.796 "superblock": true, 00:24:01.796 "num_base_bdevs": 2, 00:24:01.796 "num_base_bdevs_discovered": 1, 00:24:01.796 "num_base_bdevs_operational": 1, 00:24:01.796 "base_bdevs_list": [ 00:24:01.796 { 00:24:01.796 "name": null, 00:24:01.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.796 "is_configured": false, 00:24:01.796 "data_offset": 0, 00:24:01.796 "data_size": 7936 00:24:01.796 }, 00:24:01.796 { 00:24:01.796 "name": "BaseBdev2", 00:24:01.796 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:01.796 "is_configured": true, 00:24:01.796 "data_offset": 256, 00:24:01.796 "data_size": 7936 00:24:01.796 } 00:24:01.796 ] 00:24:01.796 }' 
00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:01.796 [2024-12-06 06:50:20.383889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:01.796 [2024-12-06 06:50:20.400348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.796 06:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:01.796 [2024-12-06 06:50:20.402852] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:03.172 "name": "raid_bdev1", 00:24:03.172 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:03.172 "strip_size_kb": 0, 00:24:03.172 "state": "online", 00:24:03.172 "raid_level": "raid1", 00:24:03.172 "superblock": true, 00:24:03.172 "num_base_bdevs": 2, 00:24:03.172 "num_base_bdevs_discovered": 2, 00:24:03.172 "num_base_bdevs_operational": 2, 00:24:03.172 "process": { 00:24:03.172 "type": "rebuild", 00:24:03.172 "target": "spare", 00:24:03.172 "progress": { 00:24:03.172 "blocks": 2560, 00:24:03.172 "percent": 32 00:24:03.172 } 00:24:03.172 }, 00:24:03.172 "base_bdevs_list": [ 00:24:03.172 { 00:24:03.172 "name": "spare", 00:24:03.172 "uuid": "d6365da6-e24b-524e-81c3-0d70069900db", 00:24:03.172 "is_configured": true, 00:24:03.172 "data_offset": 256, 00:24:03.172 "data_size": 7936 00:24:03.172 }, 00:24:03.172 { 00:24:03.172 "name": "BaseBdev2", 00:24:03.172 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:03.172 "is_configured": true, 00:24:03.172 "data_offset": 256, 00:24:03.172 "data_size": 7936 00:24:03.172 } 00:24:03.172 ] 00:24:03.172 }' 00:24:03.172 06:50:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:03.172 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=801 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:03.172 06:50:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:03.172 "name": "raid_bdev1", 00:24:03.172 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:03.172 "strip_size_kb": 0, 00:24:03.172 "state": "online", 00:24:03.172 "raid_level": "raid1", 00:24:03.172 "superblock": true, 00:24:03.172 "num_base_bdevs": 2, 00:24:03.172 "num_base_bdevs_discovered": 2, 00:24:03.172 "num_base_bdevs_operational": 2, 00:24:03.172 "process": { 00:24:03.172 "type": "rebuild", 00:24:03.172 "target": "spare", 00:24:03.172 "progress": { 00:24:03.172 "blocks": 2816, 00:24:03.172 "percent": 35 00:24:03.172 } 00:24:03.172 }, 00:24:03.172 "base_bdevs_list": [ 00:24:03.172 { 00:24:03.172 "name": "spare", 00:24:03.172 "uuid": "d6365da6-e24b-524e-81c3-0d70069900db", 00:24:03.172 "is_configured": true, 00:24:03.172 "data_offset": 256, 00:24:03.172 "data_size": 7936 00:24:03.172 }, 00:24:03.172 { 00:24:03.172 "name": "BaseBdev2", 00:24:03.172 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:03.172 "is_configured": true, 00:24:03.172 "data_offset": 256, 00:24:03.172 "data_size": 7936 00:24:03.172 } 00:24:03.172 ] 00:24:03.172 }' 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:03.172 06:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:04.107 06:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:04.107 06:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.107 06:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:04.107 06:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:04.107 06:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:04.107 06:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:04.107 06:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.108 06:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.108 06:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.108 06:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:04.366 06:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.366 06:50:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:04.366 "name": "raid_bdev1", 00:24:04.366 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:04.366 "strip_size_kb": 0, 00:24:04.366 "state": "online", 00:24:04.366 "raid_level": "raid1", 00:24:04.366 "superblock": true, 00:24:04.366 "num_base_bdevs": 2, 00:24:04.366 "num_base_bdevs_discovered": 2, 00:24:04.366 "num_base_bdevs_operational": 2, 00:24:04.366 "process": { 00:24:04.366 "type": "rebuild", 00:24:04.366 "target": "spare", 00:24:04.367 "progress": { 00:24:04.367 "blocks": 5888, 00:24:04.367 "percent": 74 00:24:04.367 } 00:24:04.367 }, 00:24:04.367 "base_bdevs_list": [ 00:24:04.367 { 00:24:04.367 "name": "spare", 00:24:04.367 "uuid": "d6365da6-e24b-524e-81c3-0d70069900db", 00:24:04.367 "is_configured": true, 00:24:04.367 "data_offset": 256, 00:24:04.367 "data_size": 7936 00:24:04.367 }, 00:24:04.367 { 00:24:04.367 "name": "BaseBdev2", 00:24:04.367 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:04.367 "is_configured": true, 00:24:04.367 "data_offset": 256, 00:24:04.367 "data_size": 7936 00:24:04.367 } 00:24:04.367 ] 00:24:04.367 }' 00:24:04.367 06:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:04.367 06:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:04.367 06:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:04.367 06:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:04.367 06:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:04.934 [2024-12-06 06:50:23.525851] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:04.934 [2024-12-06 06:50:23.526000] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:04.934 [2024-12-06 06:50:23.526208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.500 06:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:05.500 06:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.500 06:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:05.500 06:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:05.500 06:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:05.500 06:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:05.500 06:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.500 06:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.500 06:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.500 06:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.500 06:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.500 06:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:05.500 "name": "raid_bdev1", 00:24:05.500 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:05.500 "strip_size_kb": 0, 00:24:05.500 "state": "online", 00:24:05.500 "raid_level": "raid1", 00:24:05.500 "superblock": true, 00:24:05.500 "num_base_bdevs": 2, 00:24:05.500 
"num_base_bdevs_discovered": 2, 00:24:05.500 "num_base_bdevs_operational": 2, 00:24:05.500 "base_bdevs_list": [ 00:24:05.500 { 00:24:05.500 "name": "spare", 00:24:05.500 "uuid": "d6365da6-e24b-524e-81c3-0d70069900db", 00:24:05.500 "is_configured": true, 00:24:05.500 "data_offset": 256, 00:24:05.500 "data_size": 7936 00:24:05.500 }, 00:24:05.500 { 00:24:05.500 "name": "BaseBdev2", 00:24:05.500 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:05.500 "is_configured": true, 00:24:05.500 "data_offset": 256, 00:24:05.500 "data_size": 7936 00:24:05.500 } 00:24:05.500 ] 00:24:05.500 }' 00:24:05.500 06:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:05.500 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:05.500 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:05.500 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:05.500 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:24:05.500 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:05.500 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:05.500 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:05.500 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:05.500 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:05.500 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.500 06:50:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.500 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.500 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.500 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.500 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:05.500 "name": "raid_bdev1", 00:24:05.500 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:05.500 "strip_size_kb": 0, 00:24:05.500 "state": "online", 00:24:05.500 "raid_level": "raid1", 00:24:05.500 "superblock": true, 00:24:05.500 "num_base_bdevs": 2, 00:24:05.500 "num_base_bdevs_discovered": 2, 00:24:05.500 "num_base_bdevs_operational": 2, 00:24:05.500 "base_bdevs_list": [ 00:24:05.500 { 00:24:05.500 "name": "spare", 00:24:05.500 "uuid": "d6365da6-e24b-524e-81c3-0d70069900db", 00:24:05.500 "is_configured": true, 00:24:05.500 "data_offset": 256, 00:24:05.500 "data_size": 7936 00:24:05.500 }, 00:24:05.500 { 00:24:05.500 "name": "BaseBdev2", 00:24:05.500 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:05.500 "is_configured": true, 00:24:05.500 "data_offset": 256, 00:24:05.500 "data_size": 7936 00:24:05.500 } 00:24:05.500 ] 00:24:05.500 }' 00:24:05.500 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:05.758 06:50:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:05.758 "name": 
"raid_bdev1", 00:24:05.758 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:05.758 "strip_size_kb": 0, 00:24:05.758 "state": "online", 00:24:05.758 "raid_level": "raid1", 00:24:05.758 "superblock": true, 00:24:05.758 "num_base_bdevs": 2, 00:24:05.758 "num_base_bdevs_discovered": 2, 00:24:05.758 "num_base_bdevs_operational": 2, 00:24:05.758 "base_bdevs_list": [ 00:24:05.758 { 00:24:05.758 "name": "spare", 00:24:05.758 "uuid": "d6365da6-e24b-524e-81c3-0d70069900db", 00:24:05.758 "is_configured": true, 00:24:05.758 "data_offset": 256, 00:24:05.758 "data_size": 7936 00:24:05.758 }, 00:24:05.758 { 00:24:05.758 "name": "BaseBdev2", 00:24:05.758 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:05.758 "is_configured": true, 00:24:05.758 "data_offset": 256, 00:24:05.758 "data_size": 7936 00:24:05.758 } 00:24:05.758 ] 00:24:05.758 }' 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:05.758 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:06.326 [2024-12-06 06:50:24.722959] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:06.326 [2024-12-06 06:50:24.723029] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:06.326 [2024-12-06 06:50:24.723136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:06.326 [2024-12-06 06:50:24.723240] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:06.326 [2024-12-06 
06:50:24.723257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.326 06:50:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:06.326 [2024-12-06 06:50:24.794933] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:06.326 [2024-12-06 06:50:24.795015] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:06.326 [2024-12-06 06:50:24.795048] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:24:06.326 [2024-12-06 06:50:24.795063] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:06.326 [2024-12-06 06:50:24.797647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:06.326 [2024-12-06 06:50:24.797692] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:06.326 [2024-12-06 06:50:24.797791] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:06.326 [2024-12-06 06:50:24.797853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:06.326 [2024-12-06 06:50:24.798015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:06.326 spare 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:06.326 [2024-12-06 06:50:24.898147] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:06.326 [2024-12-06 06:50:24.898233] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:24:06.326 [2024-12-06 06:50:24.898413] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:24:06.326 [2024-12-06 06:50:24.898577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:06.326 [2024-12-06 06:50:24.898598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:06.326 [2024-12-06 06:50:24.898732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:06.326 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.327 06:50:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.327 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.327 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:06.327 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.327 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:06.327 "name": "raid_bdev1", 00:24:06.327 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:06.327 "strip_size_kb": 0, 00:24:06.327 "state": "online", 00:24:06.327 "raid_level": "raid1", 00:24:06.327 "superblock": true, 00:24:06.327 "num_base_bdevs": 2, 00:24:06.327 "num_base_bdevs_discovered": 2, 00:24:06.327 "num_base_bdevs_operational": 2, 00:24:06.327 "base_bdevs_list": [ 00:24:06.327 { 00:24:06.327 "name": "spare", 00:24:06.327 "uuid": "d6365da6-e24b-524e-81c3-0d70069900db", 00:24:06.327 "is_configured": true, 00:24:06.327 "data_offset": 256, 00:24:06.327 "data_size": 7936 00:24:06.327 }, 00:24:06.327 { 00:24:06.327 "name": "BaseBdev2", 00:24:06.327 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:06.327 "is_configured": true, 00:24:06.327 "data_offset": 256, 00:24:06.327 "data_size": 7936 00:24:06.327 } 00:24:06.327 ] 00:24:06.327 }' 00:24:06.327 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:06.327 06:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:06.894 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:06.894 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:06.894 06:50:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:06.894 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:06.894 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:06.894 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.894 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.894 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.894 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:06.894 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.894 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:06.894 "name": "raid_bdev1", 00:24:06.894 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:06.894 "strip_size_kb": 0, 00:24:06.894 "state": "online", 00:24:06.894 "raid_level": "raid1", 00:24:06.894 "superblock": true, 00:24:06.894 "num_base_bdevs": 2, 00:24:06.894 "num_base_bdevs_discovered": 2, 00:24:06.894 "num_base_bdevs_operational": 2, 00:24:06.894 "base_bdevs_list": [ 00:24:06.894 { 00:24:06.894 "name": "spare", 00:24:06.894 "uuid": "d6365da6-e24b-524e-81c3-0d70069900db", 00:24:06.894 "is_configured": true, 00:24:06.894 "data_offset": 256, 00:24:06.894 "data_size": 7936 00:24:06.894 }, 00:24:06.894 { 00:24:06.894 "name": "BaseBdev2", 00:24:06.894 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:06.894 "is_configured": true, 00:24:06.894 "data_offset": 256, 00:24:06.894 "data_size": 7936 00:24:06.894 } 00:24:06.894 ] 00:24:06.894 }' 00:24:06.894 06:50:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:06.894 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:06.894 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:07.152 [2024-12-06 06:50:25.607461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:07.152 06:50:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:07.152 "name": "raid_bdev1", 00:24:07.152 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:07.152 "strip_size_kb": 0, 00:24:07.152 "state": "online", 00:24:07.152 
"raid_level": "raid1", 00:24:07.152 "superblock": true, 00:24:07.152 "num_base_bdevs": 2, 00:24:07.152 "num_base_bdevs_discovered": 1, 00:24:07.152 "num_base_bdevs_operational": 1, 00:24:07.152 "base_bdevs_list": [ 00:24:07.152 { 00:24:07.152 "name": null, 00:24:07.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.152 "is_configured": false, 00:24:07.152 "data_offset": 0, 00:24:07.152 "data_size": 7936 00:24:07.152 }, 00:24:07.152 { 00:24:07.152 "name": "BaseBdev2", 00:24:07.152 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:07.152 "is_configured": true, 00:24:07.152 "data_offset": 256, 00:24:07.152 "data_size": 7936 00:24:07.152 } 00:24:07.152 ] 00:24:07.152 }' 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:07.152 06:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:07.718 06:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:07.718 06:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.718 06:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:07.718 [2024-12-06 06:50:26.115557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:07.718 [2024-12-06 06:50:26.115826] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:07.718 [2024-12-06 06:50:26.115855] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:07.718 [2024-12-06 06:50:26.115917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:07.718 [2024-12-06 06:50:26.131388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:07.718 06:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.718 06:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:07.718 [2024-12-06 06:50:26.133854] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:24:08.654 "name": "raid_bdev1", 00:24:08.654 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:08.654 "strip_size_kb": 0, 00:24:08.654 "state": "online", 00:24:08.654 "raid_level": "raid1", 00:24:08.654 "superblock": true, 00:24:08.654 "num_base_bdevs": 2, 00:24:08.654 "num_base_bdevs_discovered": 2, 00:24:08.654 "num_base_bdevs_operational": 2, 00:24:08.654 "process": { 00:24:08.654 "type": "rebuild", 00:24:08.654 "target": "spare", 00:24:08.654 "progress": { 00:24:08.654 "blocks": 2560, 00:24:08.654 "percent": 32 00:24:08.654 } 00:24:08.654 }, 00:24:08.654 "base_bdevs_list": [ 00:24:08.654 { 00:24:08.654 "name": "spare", 00:24:08.654 "uuid": "d6365da6-e24b-524e-81c3-0d70069900db", 00:24:08.654 "is_configured": true, 00:24:08.654 "data_offset": 256, 00:24:08.654 "data_size": 7936 00:24:08.654 }, 00:24:08.654 { 00:24:08.654 "name": "BaseBdev2", 00:24:08.654 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:08.654 "is_configured": true, 00:24:08.654 "data_offset": 256, 00:24:08.654 "data_size": 7936 00:24:08.654 } 00:24:08.654 ] 00:24:08.654 }' 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.654 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:08.654 [2024-12-06 06:50:27.295516] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:08.912 [2024-12-06 06:50:27.343463] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:08.912 [2024-12-06 06:50:27.343577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.912 [2024-12-06 06:50:27.343603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:08.912 [2024-12-06 06:50:27.343617] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:08.912 06:50:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:08.912 "name": "raid_bdev1", 00:24:08.912 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:08.912 "strip_size_kb": 0, 00:24:08.912 "state": "online", 00:24:08.912 "raid_level": "raid1", 00:24:08.912 "superblock": true, 00:24:08.912 "num_base_bdevs": 2, 00:24:08.912 "num_base_bdevs_discovered": 1, 00:24:08.912 "num_base_bdevs_operational": 1, 00:24:08.912 "base_bdevs_list": [ 00:24:08.912 { 00:24:08.912 "name": null, 00:24:08.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.912 "is_configured": false, 00:24:08.912 "data_offset": 0, 00:24:08.912 "data_size": 7936 00:24:08.912 }, 00:24:08.912 { 00:24:08.912 "name": "BaseBdev2", 00:24:08.912 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:08.912 "is_configured": true, 00:24:08.912 "data_offset": 256, 00:24:08.912 "data_size": 7936 00:24:08.912 } 00:24:08.912 ] 00:24:08.912 }' 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:08.912 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:09.251 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:09.251 06:50:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.251 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:09.251 [2024-12-06 06:50:27.851650] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:09.251 [2024-12-06 06:50:27.851740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:09.251 [2024-12-06 06:50:27.851781] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:09.251 [2024-12-06 06:50:27.851800] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:09.251 [2024-12-06 06:50:27.852082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:09.251 [2024-12-06 06:50:27.852124] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:09.251 [2024-12-06 06:50:27.852205] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:09.251 [2024-12-06 06:50:27.852239] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:09.251 [2024-12-06 06:50:27.852254] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:09.251 [2024-12-06 06:50:27.852286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:09.251 [2024-12-06 06:50:27.867876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:09.251 spare 00:24:09.251 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.251 06:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:09.251 [2024-12-06 06:50:27.870358] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:10.624 06:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:10.624 06:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:10.624 06:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:10.624 06:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:10.624 06:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:10.624 06:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.624 06:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.624 06:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:10.624 06:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.624 06:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.624 06:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:24:10.624 "name": "raid_bdev1", 00:24:10.624 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:10.624 "strip_size_kb": 0, 00:24:10.624 "state": "online", 00:24:10.624 "raid_level": "raid1", 00:24:10.624 "superblock": true, 00:24:10.624 "num_base_bdevs": 2, 00:24:10.624 "num_base_bdevs_discovered": 2, 00:24:10.624 "num_base_bdevs_operational": 2, 00:24:10.624 "process": { 00:24:10.624 "type": "rebuild", 00:24:10.624 "target": "spare", 00:24:10.624 "progress": { 00:24:10.624 "blocks": 2560, 00:24:10.624 "percent": 32 00:24:10.624 } 00:24:10.624 }, 00:24:10.624 "base_bdevs_list": [ 00:24:10.624 { 00:24:10.624 "name": "spare", 00:24:10.624 "uuid": "d6365da6-e24b-524e-81c3-0d70069900db", 00:24:10.624 "is_configured": true, 00:24:10.624 "data_offset": 256, 00:24:10.624 "data_size": 7936 00:24:10.624 }, 00:24:10.624 { 00:24:10.624 "name": "BaseBdev2", 00:24:10.624 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:10.624 "is_configured": true, 00:24:10.624 "data_offset": 256, 00:24:10.624 "data_size": 7936 00:24:10.624 } 00:24:10.624 ] 00:24:10.624 }' 00:24:10.624 06:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:10.624 06:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:10.624 06:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:10.624 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:10.624 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:10.624 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.624 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:10.624 [2024-12-06 
06:50:29.031520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:10.624 [2024-12-06 06:50:29.079473] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:10.624 [2024-12-06 06:50:29.079583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:10.624 [2024-12-06 06:50:29.079613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:10.624 [2024-12-06 06:50:29.079627] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:10.624 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.624 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:10.624 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:10.624 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:10.624 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:10.624 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:10.624 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:10.624 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:10.624 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:10.624 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:10.624 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:10.624 06:50:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.624 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.625 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.625 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:10.625 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.625 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:10.625 "name": "raid_bdev1", 00:24:10.625 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:10.625 "strip_size_kb": 0, 00:24:10.625 "state": "online", 00:24:10.625 "raid_level": "raid1", 00:24:10.625 "superblock": true, 00:24:10.625 "num_base_bdevs": 2, 00:24:10.625 "num_base_bdevs_discovered": 1, 00:24:10.625 "num_base_bdevs_operational": 1, 00:24:10.625 "base_bdevs_list": [ 00:24:10.625 { 00:24:10.625 "name": null, 00:24:10.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.625 "is_configured": false, 00:24:10.625 "data_offset": 0, 00:24:10.625 "data_size": 7936 00:24:10.625 }, 00:24:10.625 { 00:24:10.625 "name": "BaseBdev2", 00:24:10.625 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:10.625 "is_configured": true, 00:24:10.625 "data_offset": 256, 00:24:10.625 "data_size": 7936 00:24:10.625 } 00:24:10.625 ] 00:24:10.625 }' 00:24:10.625 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:10.625 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:11.192 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:11.193 06:50:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:11.193 "name": "raid_bdev1", 00:24:11.193 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:11.193 "strip_size_kb": 0, 00:24:11.193 "state": "online", 00:24:11.193 "raid_level": "raid1", 00:24:11.193 "superblock": true, 00:24:11.193 "num_base_bdevs": 2, 00:24:11.193 "num_base_bdevs_discovered": 1, 00:24:11.193 "num_base_bdevs_operational": 1, 00:24:11.193 "base_bdevs_list": [ 00:24:11.193 { 00:24:11.193 "name": null, 00:24:11.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.193 "is_configured": false, 00:24:11.193 "data_offset": 0, 00:24:11.193 "data_size": 7936 00:24:11.193 }, 00:24:11.193 { 00:24:11.193 "name": "BaseBdev2", 00:24:11.193 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:11.193 "is_configured": true, 00:24:11.193 "data_offset": 256, 
00:24:11.193 "data_size": 7936 00:24:11.193 } 00:24:11.193 ] 00:24:11.193 }' 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:11.193 [2024-12-06 06:50:29.819668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:11.193 [2024-12-06 06:50:29.819742] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:11.193 [2024-12-06 06:50:29.819776] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:24:11.193 [2024-12-06 06:50:29.819791] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:11.193 [2024-12-06 06:50:29.820064] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:11.193 [2024-12-06 06:50:29.820089] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:11.193 [2024-12-06 06:50:29.820158] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:11.193 [2024-12-06 06:50:29.820186] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:11.193 [2024-12-06 06:50:29.820200] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:11.193 [2024-12-06 06:50:29.820230] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:11.193 BaseBdev1 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.193 06:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:12.571 06:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:12.571 06:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:12.571 06:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:12.571 06:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:12.571 06:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:12.571 06:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:12.571 06:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:12.571 06:50:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:12.571 06:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:12.571 06:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:12.572 06:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.572 06:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.572 06:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.572 06:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:12.572 06:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.572 06:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:12.572 "name": "raid_bdev1", 00:24:12.572 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:12.572 "strip_size_kb": 0, 00:24:12.572 "state": "online", 00:24:12.572 "raid_level": "raid1", 00:24:12.572 "superblock": true, 00:24:12.572 "num_base_bdevs": 2, 00:24:12.572 "num_base_bdevs_discovered": 1, 00:24:12.572 "num_base_bdevs_operational": 1, 00:24:12.572 "base_bdevs_list": [ 00:24:12.572 { 00:24:12.572 "name": null, 00:24:12.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.572 "is_configured": false, 00:24:12.572 "data_offset": 0, 00:24:12.572 "data_size": 7936 00:24:12.572 }, 00:24:12.572 { 00:24:12.572 "name": "BaseBdev2", 00:24:12.572 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:12.572 "is_configured": true, 00:24:12.572 "data_offset": 256, 00:24:12.572 "data_size": 7936 00:24:12.572 } 00:24:12.572 ] 00:24:12.572 }' 00:24:12.572 06:50:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:12.572 06:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:12.831 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:12.831 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:12.831 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:12.831 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:12.831 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:12.831 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.831 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.831 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:12.831 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.831 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.831 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:12.831 "name": "raid_bdev1", 00:24:12.831 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:12.831 "strip_size_kb": 0, 00:24:12.831 "state": "online", 00:24:12.831 "raid_level": "raid1", 00:24:12.831 "superblock": true, 00:24:12.831 "num_base_bdevs": 2, 00:24:12.831 "num_base_bdevs_discovered": 1, 00:24:12.831 "num_base_bdevs_operational": 1, 00:24:12.831 "base_bdevs_list": [ 00:24:12.831 { 00:24:12.831 "name": 
null, 00:24:12.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.831 "is_configured": false, 00:24:12.831 "data_offset": 0, 00:24:12.831 "data_size": 7936 00:24:12.831 }, 00:24:12.831 { 00:24:12.831 "name": "BaseBdev2", 00:24:12.831 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:12.831 "is_configured": true, 00:24:12.831 "data_offset": 256, 00:24:12.831 "data_size": 7936 00:24:12.831 } 00:24:12.831 ] 00:24:12.831 }' 00:24:12.831 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:13.090 [2024-12-06 06:50:31.564949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:13.090 [2024-12-06 06:50:31.565244] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:13.090 [2024-12-06 06:50:31.565275] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:13.090 request: 00:24:13.090 { 00:24:13.090 "base_bdev": "BaseBdev1", 00:24:13.090 "raid_bdev": "raid_bdev1", 00:24:13.090 "method": "bdev_raid_add_base_bdev", 00:24:13.090 "req_id": 1 00:24:13.090 } 00:24:13.090 Got JSON-RPC error response 00:24:13.090 response: 00:24:13.090 { 00:24:13.090 "code": -22, 00:24:13.090 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:13.090 } 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:13.090 06:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:14.073 "name": "raid_bdev1", 00:24:14.073 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:14.073 "strip_size_kb": 0, 
00:24:14.073 "state": "online", 00:24:14.073 "raid_level": "raid1", 00:24:14.073 "superblock": true, 00:24:14.073 "num_base_bdevs": 2, 00:24:14.073 "num_base_bdevs_discovered": 1, 00:24:14.073 "num_base_bdevs_operational": 1, 00:24:14.073 "base_bdevs_list": [ 00:24:14.073 { 00:24:14.073 "name": null, 00:24:14.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.073 "is_configured": false, 00:24:14.073 "data_offset": 0, 00:24:14.073 "data_size": 7936 00:24:14.073 }, 00:24:14.073 { 00:24:14.073 "name": "BaseBdev2", 00:24:14.073 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:14.073 "is_configured": true, 00:24:14.073 "data_offset": 256, 00:24:14.073 "data_size": 7936 00:24:14.073 } 00:24:14.073 ] 00:24:14.073 }' 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:14.073 06:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:14.639 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:14.639 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:14.639 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:14.639 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:14.639 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:14.639 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.639 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.639 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.639 
06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:14.639 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.639 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:14.639 "name": "raid_bdev1", 00:24:14.639 "uuid": "419ad046-02f1-48ec-a0dd-47e904b204ab", 00:24:14.639 "strip_size_kb": 0, 00:24:14.639 "state": "online", 00:24:14.639 "raid_level": "raid1", 00:24:14.639 "superblock": true, 00:24:14.639 "num_base_bdevs": 2, 00:24:14.639 "num_base_bdevs_discovered": 1, 00:24:14.639 "num_base_bdevs_operational": 1, 00:24:14.639 "base_bdevs_list": [ 00:24:14.639 { 00:24:14.639 "name": null, 00:24:14.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.639 "is_configured": false, 00:24:14.639 "data_offset": 0, 00:24:14.639 "data_size": 7936 00:24:14.639 }, 00:24:14.639 { 00:24:14.639 "name": "BaseBdev2", 00:24:14.639 "uuid": "a607b04e-12e8-5dc5-8052-0f842abc322e", 00:24:14.639 "is_configured": true, 00:24:14.639 "data_offset": 256, 00:24:14.639 "data_size": 7936 00:24:14.639 } 00:24:14.639 ] 00:24:14.639 }' 00:24:14.639 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:14.639 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:14.639 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:14.898 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:14.898 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89690 00:24:14.898 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89690 ']' 00:24:14.898 06:50:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89690 00:24:14.898 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:24:14.898 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.898 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89690 00:24:14.898 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:14.898 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:14.898 killing process with pid 89690 00:24:14.898 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89690' 00:24:14.898 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89690 00:24:14.898 Received shutdown signal, test time was about 60.000000 seconds 00:24:14.898 00:24:14.898 Latency(us) 00:24:14.898 [2024-12-06T06:50:33.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.898 [2024-12-06T06:50:33.545Z] =================================================================================================================== 00:24:14.898 [2024-12-06T06:50:33.545Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:14.898 [2024-12-06 06:50:33.340790] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:14.898 06:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89690 00:24:14.898 [2024-12-06 06:50:33.341056] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:14.898 [2024-12-06 06:50:33.341148] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:24:14.898 [2024-12-06 06:50:33.341171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:15.156 [2024-12-06 06:50:33.633759] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:16.533 06:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:24:16.533 00:24:16.533 real 0m18.740s 00:24:16.533 user 0m25.533s 00:24:16.533 sys 0m1.469s 00:24:16.533 06:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.533 06:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:16.533 ************************************ 00:24:16.533 END TEST raid_rebuild_test_sb_md_interleaved 00:24:16.533 ************************************ 00:24:16.533 06:50:34 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:24:16.533 06:50:34 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:24:16.533 06:50:34 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89690 ']' 00:24:16.533 06:50:34 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89690 00:24:16.533 06:50:34 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:24:16.533 00:24:16.533 real 13m4.711s 00:24:16.533 user 18m27.436s 00:24:16.533 sys 1m45.906s 00:24:16.533 06:50:34 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.533 06:50:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:16.533 ************************************ 00:24:16.533 END TEST bdev_raid 00:24:16.533 ************************************ 00:24:16.533 06:50:34 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:24:16.533 06:50:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:16.533 06:50:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.533 06:50:34 -- common/autotest_common.sh@10 -- # set +x 00:24:16.533 
************************************ 00:24:16.533 START TEST spdkcli_raid 00:24:16.533 ************************************ 00:24:16.533 06:50:34 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:24:16.533 * Looking for test storage... 00:24:16.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:16.533 06:50:34 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:16.533 06:50:34 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:24:16.533 06:50:34 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:16.533 06:50:35 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:16.533 06:50:35 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.533 06:50:35 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.533 06:50:35 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.533 06:50:35 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.533 06:50:35 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.533 06:50:35 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.533 06:50:35 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.533 06:50:35 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.533 06:50:35 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.533 06:50:35 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.533 06:50:35 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.533 06:50:35 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:24:16.533 06:50:35 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:24:16.533 06:50:35 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.533 06:50:35 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:16.533 06:50:35 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:24:16.534 06:50:35 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:24:16.534 06:50:35 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.534 06:50:35 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:24:16.534 06:50:35 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.534 06:50:35 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:24:16.534 06:50:35 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:24:16.534 06:50:35 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.534 06:50:35 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:24:16.534 06:50:35 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.534 06:50:35 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.534 06:50:35 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.534 06:50:35 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:24:16.534 06:50:35 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.534 06:50:35 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:16.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.534 --rc genhtml_branch_coverage=1 00:24:16.534 --rc genhtml_function_coverage=1 00:24:16.534 --rc genhtml_legend=1 00:24:16.534 --rc geninfo_all_blocks=1 00:24:16.534 --rc geninfo_unexecuted_blocks=1 00:24:16.534 00:24:16.534 ' 00:24:16.534 06:50:35 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:16.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.534 --rc genhtml_branch_coverage=1 00:24:16.534 --rc genhtml_function_coverage=1 00:24:16.534 --rc genhtml_legend=1 00:24:16.534 --rc geninfo_all_blocks=1 00:24:16.534 --rc geninfo_unexecuted_blocks=1 00:24:16.534 00:24:16.534 ' 00:24:16.534 
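The `lt 1.15 2` / `cmp_versions` trace above splits each dotted version on `.` into an array and compares field by field, padding the shorter version with zeros (here concluding that lcov 1.15 < 2). A self-contained sketch of that comparison loop; `ver_lt` is a hypothetical name for illustration, not the SPDK helper itself:

```shell
# Return 0 (true) when dotted version $1 is strictly less than $2.
# Field-by-field numeric comparison, missing fields treated as 0,
# modeled on the scripts/common.sh cmp_versions loop in the trace.
ver_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

Numeric field comparison is what makes 1.15 sort below 2 here, whereas a plain string compare would put "1.15" after "1.2".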
06:50:35 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:16.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.534 --rc genhtml_branch_coverage=1 00:24:16.534 --rc genhtml_function_coverage=1 00:24:16.534 --rc genhtml_legend=1 00:24:16.534 --rc geninfo_all_blocks=1 00:24:16.534 --rc geninfo_unexecuted_blocks=1 00:24:16.534 00:24:16.534 ' 00:24:16.534 06:50:35 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:16.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.534 --rc genhtml_branch_coverage=1 00:24:16.534 --rc genhtml_function_coverage=1 00:24:16.534 --rc genhtml_legend=1 00:24:16.534 --rc geninfo_all_blocks=1 00:24:16.534 --rc geninfo_unexecuted_blocks=1 00:24:16.534 00:24:16.534 ' 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:24:16.534 06:50:35 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:24:16.534 06:50:35 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:16.534 06:50:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90372 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90372 00:24:16.534 06:50:35 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:24:16.534 06:50:35 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90372 ']' 00:24:16.534 06:50:35 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.534 06:50:35 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.534 06:50:35 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.534 06:50:35 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.534 06:50:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:16.794 [2024-12-06 06:50:35.191942] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:24:16.794 [2024-12-06 06:50:35.192821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90372 ] 00:24:16.794 [2024-12-06 06:50:35.384450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:17.051 [2024-12-06 06:50:35.523080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.051 [2024-12-06 06:50:35.523094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.987 06:50:36 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.987 06:50:36 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:24:17.987 06:50:36 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:24:17.987 06:50:36 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:17.987 06:50:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:17.987 06:50:36 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:24:17.987 06:50:36 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:17.987 06:50:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:17.987 06:50:36 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:17.987 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:17.987 ' 00:24:19.386 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:24:19.386 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:24:19.646 06:50:38 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:24:19.646 06:50:38 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:19.646 06:50:38 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:24:19.646 06:50:38 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:24:19.646 06:50:38 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:19.646 06:50:38 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:19.646 06:50:38 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:24:19.646 ' 00:24:21.025 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:24:21.025 06:50:39 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:24:21.025 06:50:39 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.025 06:50:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:21.025 06:50:39 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:24:21.025 06:50:39 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:21.025 06:50:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:21.025 06:50:39 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:24:21.025 06:50:39 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:24:21.593 06:50:40 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:24:21.593 06:50:40 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:24:21.593 06:50:40 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:24:21.593 06:50:40 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.593 06:50:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:21.593 06:50:40 spdkcli_raid -- 
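The `check_match` step above captures `spdkcli.py ll /bdevs` output into `spdkcli_raid.test` and compares it against the checked-in `spdkcli_raid.test.match` file with SPDK's match tool. A simplified stand-in using plain `diff` (the real `test/app/match` tool also understands wildcard placeholders in the expected file, which `diff` does not; `check_output` is a hypothetical helper name):

```shell
# Simplified stand-in for SPDK's check_match: capture tool output to a
# file and diff it against an expected ".match" file, failing the test
# on any divergence.
check_output() {
    local actual=$1 expected=$2
    if ! diff -u "$expected" "$actual"; then
        echo "output does not match $expected" >&2
        return 1
    fi
}

printf 'Malloc1\nMalloc2\n' > /tmp/ll_output.txt
printf 'Malloc1\nMalloc2\n' > /tmp/ll_expected.match
check_output /tmp/ll_output.txt /tmp/ll_expected.match && echo match
```

Deleting the generated `.test` file afterwards, as the trace's `rm -f` does, keeps a stale capture from masking a later regression.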
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:24:21.593 06:50:40 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:21.593 06:50:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:21.593 06:50:40 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:24:21.593 ' 00:24:22.528 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:24:22.789 06:50:41 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:24:22.789 06:50:41 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:22.789 06:50:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:22.789 06:50:41 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:24:22.789 06:50:41 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:22.789 06:50:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:22.789 06:50:41 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:24:22.789 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:24:22.789 ' 00:24:24.167 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:24:24.167 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:24:24.425 06:50:42 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:24:24.425 06:50:42 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:24.425 06:50:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:24.425 06:50:42 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90372 00:24:24.425 06:50:42 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90372 ']' 00:24:24.425 06:50:42 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90372 00:24:24.425 06:50:42 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:24:24.425 06:50:42 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.425 06:50:42 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90372 00:24:24.425 06:50:42 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:24.425 06:50:42 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:24.426 killing process with pid 90372 00:24:24.426 06:50:42 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90372' 00:24:24.426 06:50:42 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90372 00:24:24.426 06:50:42 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90372 00:24:26.956 06:50:45 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:24:26.956 06:50:45 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90372 ']' 00:24:26.956 06:50:45 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90372 00:24:26.956 06:50:45 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90372 ']' 00:24:26.956 06:50:45 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90372 00:24:26.956 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90372) - No such process 00:24:26.956 Process with pid 90372 is not found 00:24:26.956 06:50:45 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90372 is not found' 00:24:26.956 06:50:45 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:24:26.956 06:50:45 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:24:26.956 06:50:45 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:24:26.956 06:50:45 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:24:26.956 00:24:26.956 real 0m10.550s 00:24:26.956 user 0m22.001s 00:24:26.956 sys 
0m1.139s 00:24:26.956 06:50:45 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:26.956 06:50:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:26.956 ************************************ 00:24:26.956 END TEST spdkcli_raid 00:24:26.956 ************************************ 00:24:26.956 06:50:45 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:24:26.956 06:50:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:26.956 06:50:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:26.956 06:50:45 -- common/autotest_common.sh@10 -- # set +x 00:24:26.956 ************************************ 00:24:26.956 START TEST blockdev_raid5f 00:24:26.956 ************************************ 00:24:26.956 06:50:45 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:24:26.956 * Looking for test storage... 00:24:26.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:24:26.956 06:50:45 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:26.956 06:50:45 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:26.956 06:50:45 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:24:27.213 06:50:45 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:27.213 06:50:45 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:27.213 06:50:45 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:27.213 06:50:45 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:27.213 06:50:45 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.213 06:50:45 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:24:27.213 06:50:45 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:24:27.213 06:50:45 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
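The `killprocess 89690` / `killprocess 90372` teardown traces above follow one pattern: confirm the pid is still alive with `kill -0`, read its command name via `ps -o comm=` so a recycled pid belonging to some other program is never signalled, then kill and wait. A condensed sketch of that guard; `safe_kill` is a hypothetical name modeled on `autotest_common.sh`'s killprocess, not the real helper:

```shell
# Kill a pid only if it is still alive and its command name matches
# what we expect; this guards against pid reuse between the time the
# pid was recorded and the time cleanup runs.
safe_kill() {
    local pid=$1 expected=$2
    kill -0 "$pid" 2>/dev/null || return 0       # already gone: nothing to do
    local name
    name=$(ps -o comm= -p "$pid") || return 0    # raced with exit: nothing to do
    if [ "$name" != "$expected" ]; then
        echo "refusing to kill $pid: comm is '$name', expected '$expected'" >&2
        return 1
    fi
    kill "$pid"
    wait "$pid" 2>/dev/null || true              # reap it if it was our child
}
```

The final `wait` is what the trace's `-- # wait 89690` line corresponds to: it blocks until the reactor process has actually exited before cleanup (`rm -rf /raidtest`) proceeds.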
ver2 00:24:27.213 06:50:45 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:24:27.213 06:50:45 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:24:27.213 06:50:45 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:24:27.213 06:50:45 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:27.213 06:50:45 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:24:27.213 06:50:45 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:24:27.213 06:50:45 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:27.213 06:50:45 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:27.213 06:50:45 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:24:27.214 06:50:45 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:24:27.214 06:50:45 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.214 06:50:45 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:24:27.214 06:50:45 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:24:27.214 06:50:45 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:24:27.214 06:50:45 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:24:27.214 06:50:45 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:27.214 06:50:45 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:24:27.214 06:50:45 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:24:27.214 06:50:45 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:27.214 06:50:45 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:27.214 06:50:45 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:24:27.214 06:50:45 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:27.214 06:50:45 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:27.214 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.214 --rc genhtml_branch_coverage=1 00:24:27.214 --rc genhtml_function_coverage=1 00:24:27.214 --rc genhtml_legend=1 00:24:27.214 --rc geninfo_all_blocks=1 00:24:27.214 --rc geninfo_unexecuted_blocks=1 00:24:27.214 00:24:27.214 ' 00:24:27.214 06:50:45 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:27.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.214 --rc genhtml_branch_coverage=1 00:24:27.214 --rc genhtml_function_coverage=1 00:24:27.214 --rc genhtml_legend=1 00:24:27.214 --rc geninfo_all_blocks=1 00:24:27.214 --rc geninfo_unexecuted_blocks=1 00:24:27.214 00:24:27.214 ' 00:24:27.214 06:50:45 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:27.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.214 --rc genhtml_branch_coverage=1 00:24:27.214 --rc genhtml_function_coverage=1 00:24:27.214 --rc genhtml_legend=1 00:24:27.214 --rc geninfo_all_blocks=1 00:24:27.214 --rc geninfo_unexecuted_blocks=1 00:24:27.214 00:24:27.214 ' 00:24:27.214 06:50:45 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:27.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.214 --rc genhtml_branch_coverage=1 00:24:27.214 --rc genhtml_function_coverage=1 00:24:27.214 --rc genhtml_legend=1 00:24:27.214 --rc geninfo_all_blocks=1 00:24:27.214 --rc geninfo_unexecuted_blocks=1 00:24:27.214 00:24:27.214 ' 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90658 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
90658 00:24:27.214 06:50:45 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:24:27.214 06:50:45 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90658 ']' 00:24:27.214 06:50:45 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.214 06:50:45 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.214 06:50:45 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.214 06:50:45 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.214 06:50:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:27.214 [2024-12-06 06:50:45.791147] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:24:27.214 [2024-12-06 06:50:45.791344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90658 ] 00:24:27.472 [2024-12-06 06:50:45.976503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.472 [2024-12-06 06:50:46.105134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.405 06:50:46 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.405 06:50:46 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:24:28.405 06:50:46 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:24:28.405 06:50:46 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:24:28.405 06:50:46 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:24:28.405 06:50:46 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.405 06:50:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:28.405 Malloc0 00:24:28.663 Malloc1 00:24:28.663 Malloc2 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.663 06:50:47 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.663 06:50:47 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:24:28.663 06:50:47 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.663 06:50:47 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.663 06:50:47 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.663 06:50:47 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:24:28.663 06:50:47 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:28.663 06:50:47 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.663 06:50:47 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:24:28.663 06:50:47 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "13831e3d-fcb8-4107-a927-ff67df026812"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "13831e3d-fcb8-4107-a927-ff67df026812",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "13831e3d-fcb8-4107-a927-ff67df026812",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "c923ddce-e4ec-4302-95cd-31b8a393072a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "4f7eb07d-dc9f-4b0d-9082-10cf37381fed",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "a311010a-88eb-4fa9-9c6a-5550dc8487af",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:24:28.663 06:50:47 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:24:28.663 06:50:47 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:24:28.663 06:50:47 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:24:28.663 06:50:47 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:24:28.663 06:50:47 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90658 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90658 ']' 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90658 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.663 06:50:47 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90658 00:24:28.921 06:50:47 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:28.921 killing process with pid 90658 00:24:28.921 06:50:47 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:28.921 06:50:47 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90658' 00:24:28.921 06:50:47 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90658 00:24:28.921 06:50:47 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90658 00:24:31.520 06:50:49 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:31.520 06:50:49 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:24:31.520 06:50:49 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:31.520 06:50:49 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.521 06:50:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:31.521 ************************************ 00:24:31.521 START TEST bdev_hello_world 00:24:31.521 ************************************ 00:24:31.521 06:50:49 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:24:31.521 [2024-12-06 06:50:49.948905] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:24:31.521 [2024-12-06 06:50:49.949094] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90724 ] 00:24:31.521 [2024-12-06 06:50:50.135083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.780 [2024-12-06 06:50:50.266168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.348 [2024-12-06 06:50:50.832458] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:24:32.348 [2024-12-06 06:50:50.832539] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:24:32.348 [2024-12-06 06:50:50.832567] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:24:32.348 [2024-12-06 06:50:50.833131] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:24:32.348 [2024-12-06 06:50:50.833311] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:24:32.348 [2024-12-06 06:50:50.833345] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:24:32.348 [2024-12-06 06:50:50.833418] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:24:32.348 00:24:32.348 [2024-12-06 06:50:50.833448] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:24:33.745 00:24:33.745 real 0m2.325s 00:24:33.745 user 0m1.868s 00:24:33.745 sys 0m0.330s 00:24:33.745 06:50:52 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:33.745 06:50:52 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:24:33.745 ************************************ 00:24:33.745 END TEST bdev_hello_world 00:24:33.745 ************************************ 00:24:33.745 06:50:52 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:24:33.745 06:50:52 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:33.745 06:50:52 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:33.745 06:50:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:33.745 ************************************ 00:24:33.745 START TEST bdev_bounds 00:24:33.745 ************************************ 00:24:33.745 06:50:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:24:33.745 06:50:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90768 00:24:33.745 06:50:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:33.745 06:50:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:24:33.745 06:50:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90768' 00:24:33.745 Process bdevio pid: 90768 00:24:33.745 06:50:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90768 00:24:33.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:33.745 06:50:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90768 ']' 00:24:33.745 06:50:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.745 06:50:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:33.745 06:50:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.745 06:50:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:33.745 06:50:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:24:33.745 [2024-12-06 06:50:52.323541] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:24:33.745 [2024-12-06 06:50:52.323719] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90768 ] 00:24:34.004 [2024-12-06 06:50:52.506644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:34.262 [2024-12-06 06:50:52.658580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.262 [2024-12-06 06:50:52.658707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.262 [2024-12-06 06:50:52.658719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.832 06:50:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.832 06:50:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:24:34.832 06:50:53 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:24:34.832 I/O targets: 00:24:34.832 raid5f: 131072 blocks of 512 bytes (64 
MiB) 00:24:34.832 00:24:34.832 00:24:34.832 CUnit - A unit testing framework for C - Version 2.1-3 00:24:34.832 http://cunit.sourceforge.net/ 00:24:34.832 00:24:34.832 00:24:34.832 Suite: bdevio tests on: raid5f 00:24:34.832 Test: blockdev write read block ...passed 00:24:34.832 Test: blockdev write zeroes read block ...passed 00:24:34.832 Test: blockdev write zeroes read no split ...passed 00:24:35.090 Test: blockdev write zeroes read split ...passed 00:24:35.090 Test: blockdev write zeroes read split partial ...passed 00:24:35.090 Test: blockdev reset ...passed 00:24:35.090 Test: blockdev write read 8 blocks ...passed 00:24:35.090 Test: blockdev write read size > 128k ...passed 00:24:35.090 Test: blockdev write read invalid size ...passed 00:24:35.090 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:35.091 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:35.091 Test: blockdev write read max offset ...passed 00:24:35.091 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:35.091 Test: blockdev writev readv 8 blocks ...passed 00:24:35.091 Test: blockdev writev readv 30 x 1block ...passed 00:24:35.091 Test: blockdev writev readv block ...passed 00:24:35.091 Test: blockdev writev readv size > 128k ...passed 00:24:35.091 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:35.091 Test: blockdev comparev and writev ...passed 00:24:35.091 Test: blockdev nvme passthru rw ...passed 00:24:35.091 Test: blockdev nvme passthru vendor specific ...passed 00:24:35.091 Test: blockdev nvme admin passthru ...passed 00:24:35.091 Test: blockdev copy ...passed 00:24:35.091 00:24:35.091 Run Summary: Type Total Ran Passed Failed Inactive 00:24:35.091 suites 1 1 n/a 0 0 00:24:35.091 tests 23 23 23 0 0 00:24:35.091 asserts 130 130 130 0 n/a 00:24:35.091 00:24:35.091 Elapsed time = 0.570 seconds 00:24:35.091 0 00:24:35.091 06:50:53 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # 
killprocess 90768 00:24:35.091 06:50:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90768 ']' 00:24:35.091 06:50:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90768 00:24:35.091 06:50:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:24:35.091 06:50:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.091 06:50:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90768 00:24:35.091 06:50:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.091 06:50:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:35.091 06:50:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90768' 00:24:35.091 killing process with pid 90768 00:24:35.091 06:50:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90768 00:24:35.091 06:50:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90768 00:24:36.466 06:50:55 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:24:36.466 00:24:36.466 real 0m2.858s 00:24:36.466 user 0m7.059s 00:24:36.466 sys 0m0.456s 00:24:36.466 06:50:55 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:36.466 06:50:55 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:24:36.466 ************************************ 00:24:36.466 END TEST bdev_bounds 00:24:36.466 ************************************ 00:24:36.725 06:50:55 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:24:36.725 06:50:55 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:36.725 06:50:55 blockdev_raid5f -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:24:36.725 06:50:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:36.725 ************************************ 00:24:36.725 START TEST bdev_nbd 00:24:36.725 ************************************ 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:24:36.725 06:50:55 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90829 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90829 /var/tmp/spdk-nbd.sock 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90829 ']' 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:36.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:36.725 06:50:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:36.725 [2024-12-06 06:50:55.228408] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:24:36.725 [2024-12-06 06:50:55.228583] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.983 [2024-12-06 06:50:55.409382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.983 [2024-12-06 06:50:55.543742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:37.934 1+0 records in 00:24:37.934 1+0 records out 00:24:37.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289678 s, 14.1 MB/s 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:24:37.934 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:38.503 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:24:38.503 { 00:24:38.503 "nbd_device": "/dev/nbd0", 00:24:38.503 "bdev_name": "raid5f" 00:24:38.503 } 00:24:38.503 ]' 00:24:38.503 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:24:38.503 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:24:38.503 { 00:24:38.503 "nbd_device": "/dev/nbd0", 00:24:38.503 "bdev_name": "raid5f" 00:24:38.503 } 00:24:38.503 ]' 00:24:38.503 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:24:38.503 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:38.503 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:38.503 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:38.503 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:38.503 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:38.503 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:38.503 06:50:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:38.503 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:24:38.763 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:38.763 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:38.763 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:38.763 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:38.763 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:38.763 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:38.763 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:38.763 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:38.763 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:38.763 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:39.022 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:24:39.282 /dev/nbd0 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:39.282 06:50:57 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:39.282 1+0 records in 00:24:39.282 1+0 records out 00:24:39.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390853 s, 10.5 MB/s 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:39.282 06:50:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:39.542 { 00:24:39.542 "nbd_device": "/dev/nbd0", 00:24:39.542 "bdev_name": "raid5f" 00:24:39.542 } 00:24:39.542 ]' 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:39.542 { 00:24:39.542 "nbd_device": "/dev/nbd0", 00:24:39.542 "bdev_name": "raid5f" 00:24:39.542 } 00:24:39.542 ]' 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:24:39.542 256+0 records in 00:24:39.542 256+0 records out 00:24:39.542 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0098637 s, 106 MB/s 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:39.542 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:39.804 256+0 records in 00:24:39.804 256+0 records out 00:24:39.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0389269 s, 26.9 MB/s 00:24:39.804 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:24:39.804 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:24:39.804 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:39.804 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:39.804 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:39.804 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:39.804 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:39.804 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:39.804 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:24:39.804 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:39.804 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:39.804 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:39.804 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:39.804 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:39.804 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:39.804 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:39.804 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:40.064 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:40.064 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:40.064 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:40.064 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:40.064 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:40.064 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:40.064 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:40.064 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:40.064 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:40.064 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:40.064 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:24:40.324 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:40.324 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:40.324 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:40.324 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:40.324 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:40.324 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:40.324 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:40.324 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:40.324 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:40.324 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:24:40.324 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:40.324 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:24:40.324 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:40.324 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:40.324 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:24:40.324 06:50:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:24:40.583 malloc_lvol_verify 00:24:40.583 06:50:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:24:40.842 82b2cfc1-7115-432e-a806-d0a0271f63b9 00:24:41.102 06:50:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:24:41.362 4867c168-5970-4a5a-a41e-5dba6356f50a 00:24:41.362 06:50:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:24:41.621 /dev/nbd0 00:24:41.621 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:24:41.621 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:24:41.621 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:24:41.621 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:24:41.621 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:24:41.621 mke2fs 1.47.0 (5-Feb-2023) 00:24:41.621 Discarding device blocks: 0/4096 done 00:24:41.621 Creating filesystem with 4096 1k blocks and 1024 inodes 00:24:41.621 00:24:41.621 Allocating group tables: 0/1 done 00:24:41.621 Writing inode tables: 0/1 done 00:24:41.621 Creating journal (1024 blocks): done 00:24:41.621 Writing superblocks and filesystem accounting information: 0/1 done 00:24:41.621 00:24:41.621 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:41.621 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:41.621 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:41.621 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:41.621 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:41.621 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:41.621 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:41.880 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:41.880 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:41.880 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:41.880 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:41.880 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:41.880 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:41.880 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:41.880 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:41.881 06:51:00 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90829 00:24:41.881 06:51:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90829 ']' 00:24:41.881 06:51:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90829 00:24:41.881 06:51:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:24:41.881 06:51:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.881 06:51:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90829 00:24:41.881 06:51:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:41.881 06:51:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:41.881 killing process with pid 90829 00:24:41.881 06:51:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90829' 00:24:41.881 06:51:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90829 00:24:41.881 06:51:00 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90829 00:24:43.288 06:51:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:24:43.289 00:24:43.289 real 0m6.673s 00:24:43.289 user 0m9.675s 00:24:43.289 sys 0m1.355s 00:24:43.289 06:51:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:43.289 06:51:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:43.289 ************************************ 00:24:43.289 END TEST bdev_nbd 00:24:43.289 ************************************ 00:24:43.289 06:51:01 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:24:43.289 06:51:01 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:24:43.289 06:51:01 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:24:43.289 06:51:01 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:24:43.289 06:51:01 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:43.289 06:51:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:43.289 06:51:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:43.289 ************************************ 00:24:43.289 START TEST bdev_fio 00:24:43.289 ************************************ 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:24:43.289 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:24:43.289 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:24:43.547 ************************************ 00:24:43.547 START TEST bdev_fio_rw_verify 00:24:43.547 ************************************ 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:43.547 06:51:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:43.805 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:43.805 fio-3.35 00:24:43.805 Starting 1 thread 00:24:56.008 00:24:56.008 job_raid5f: (groupid=0, jobs=1): err= 0: pid=91044: Fri Dec 6 06:51:13 2024 00:24:56.008 read: IOPS=8086, BW=31.6MiB/s (33.1MB/s)(316MiB/10001msec) 00:24:56.008 slat (usec): min=23, max=108, avg=31.07, stdev= 5.44 00:24:56.008 clat (usec): min=14, max=527, avg=197.48, stdev=75.32 00:24:56.008 lat (usec): min=42, max=572, avg=228.55, stdev=76.46 00:24:56.008 clat percentiles (usec): 00:24:56.008 | 50.000th=[ 198], 99.000th=[ 371], 99.900th=[ 429], 99.990th=[ 482], 00:24:56.008 | 99.999th=[ 529] 00:24:56.008 write: IOPS=8431, BW=32.9MiB/s (34.5MB/s)(326MiB/9895msec); 0 zone resets 00:24:56.008 slat (usec): min=11, max=235, avg=24.32, stdev= 6.44 00:24:56.008 clat (usec): min=84, max=1630, avg=455.41, stdev=72.02 00:24:56.008 lat (usec): min=105, max=1742, avg=479.73, stdev=74.68 00:24:56.008 clat percentiles (usec): 00:24:56.008 | 50.000th=[ 453], 99.000th=[ 693], 99.900th=[ 807], 99.990th=[ 1319], 00:24:56.008 | 99.999th=[ 1631] 00:24:56.008 bw ( KiB/s): min=30040, max=36024, per=98.86%, avg=33342.68, stdev=1685.51, samples=19 00:24:56.008 iops : min= 7510, max= 9006, avg=8335.63, stdev=421.41, samples=19 00:24:56.008 lat (usec) : 20=0.01%, 100=5.48%, 250=29.75%, 
500=54.92%, 750=9.71% 00:24:56.008 lat (usec) : 1000=0.12% 00:24:56.008 lat (msec) : 2=0.01% 00:24:56.008 cpu : usr=98.47%, sys=0.65%, ctx=24, majf=0, minf=7067 00:24:56.008 IO depths : 1=7.8%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:56.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.008 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.008 issued rwts: total=80869,83428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.008 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:56.008 00:24:56.008 Run status group 0 (all jobs): 00:24:56.008 READ: bw=31.6MiB/s (33.1MB/s), 31.6MiB/s-31.6MiB/s (33.1MB/s-33.1MB/s), io=316MiB (331MB), run=10001-10001msec 00:24:56.008 WRITE: bw=32.9MiB/s (34.5MB/s), 32.9MiB/s-32.9MiB/s (34.5MB/s-34.5MB/s), io=326MiB (342MB), run=9895-9895msec 00:24:56.267 ----------------------------------------------------- 00:24:56.267 Suppressions used: 00:24:56.267 count bytes template 00:24:56.267 1 7 /usr/src/fio/parse.c 00:24:56.267 47 4512 /usr/src/fio/iolog.c 00:24:56.267 1 8 libtcmalloc_minimal.so 00:24:56.267 1 904 libcrypto.so 00:24:56.267 ----------------------------------------------------- 00:24:56.267 00:24:56.267 00:24:56.267 real 0m12.813s 00:24:56.267 user 0m13.284s 00:24:56.267 sys 0m0.819s 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:24:56.267 ************************************ 00:24:56.267 END TEST bdev_fio_rw_verify 00:24:56.267 ************************************ 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # 
fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "13831e3d-fcb8-4107-a927-ff67df026812"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "13831e3d-fcb8-4107-a927-ff67df026812",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "13831e3d-fcb8-4107-a927-ff67df026812",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "c923ddce-e4ec-4302-95cd-31b8a393072a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "4f7eb07d-dc9f-4b0d-9082-10cf37381fed",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "a311010a-88eb-4fa9-9c6a-5550dc8487af",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:24:56.267 /home/vagrant/spdk_repo/spdk 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:24:56.267 00:24:56.267 real 0m13.029s 
00:24:56.267 user 0m13.392s 00:24:56.267 sys 0m0.909s 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:56.267 06:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:24:56.267 ************************************ 00:24:56.267 END TEST bdev_fio 00:24:56.267 ************************************ 00:24:56.525 06:51:14 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:56.525 06:51:14 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:56.525 06:51:14 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:24:56.525 06:51:14 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:56.525 06:51:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:56.525 ************************************ 00:24:56.525 START TEST bdev_verify 00:24:56.525 ************************************ 00:24:56.525 06:51:14 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:56.525 [2024-12-06 06:51:15.025586] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 
00:24:56.525 [2024-12-06 06:51:15.025760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91209 ] 00:24:56.783 [2024-12-06 06:51:15.201689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:56.783 [2024-12-06 06:51:15.334605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.783 [2024-12-06 06:51:15.334611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.347 Running I/O for 5 seconds... 00:24:59.655 11070.00 IOPS, 43.24 MiB/s [2024-12-06T06:51:19.238Z] 11633.00 IOPS, 45.44 MiB/s [2024-12-06T06:51:20.175Z] 11976.67 IOPS, 46.78 MiB/s [2024-12-06T06:51:21.112Z] 12427.00 IOPS, 48.54 MiB/s [2024-12-06T06:51:21.112Z] 12700.40 IOPS, 49.61 MiB/s 00:25:02.465 Latency(us) 00:25:02.465 [2024-12-06T06:51:21.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.465 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:02.465 Verification LBA range: start 0x0 length 0x2000 00:25:02.465 raid5f : 5.01 6359.63 24.84 0.00 0.00 30242.79 277.41 23116.33 00:25:02.465 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:02.465 Verification LBA range: start 0x2000 length 0x2000 00:25:02.465 raid5f : 5.02 6351.81 24.81 0.00 0.00 30395.51 266.24 22878.02 00:25:02.465 [2024-12-06T06:51:21.112Z] =================================================================================================================== 00:25:02.465 [2024-12-06T06:51:21.112Z] Total : 12711.44 49.65 0.00 0.00 30319.15 266.24 23116.33 00:25:03.843 00:25:03.843 real 0m7.298s 00:25:03.843 user 0m13.409s 00:25:03.843 sys 0m0.307s 00:25:03.843 06:51:22 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.843 06:51:22 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:25:03.843 ************************************ 00:25:03.843 END TEST bdev_verify 00:25:03.843 ************************************ 00:25:03.843 06:51:22 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:03.843 06:51:22 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:25:03.843 06:51:22 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:03.843 06:51:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:03.843 ************************************ 00:25:03.843 START TEST bdev_verify_big_io 00:25:03.843 ************************************ 00:25:03.843 06:51:22 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:03.843 [2024-12-06 06:51:22.387241] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:25:03.843 [2024-12-06 06:51:22.387454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91306 ] 00:25:04.102 [2024-12-06 06:51:22.567118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:04.102 [2024-12-06 06:51:22.701791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.102 [2024-12-06 06:51:22.701801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.670 Running I/O for 5 seconds... 
00:25:06.986 506.00 IOPS, 31.62 MiB/s [2024-12-06T06:51:26.569Z] 507.00 IOPS, 31.69 MiB/s [2024-12-06T06:51:27.505Z] 528.00 IOPS, 33.00 MiB/s [2024-12-06T06:51:28.442Z] 602.75 IOPS, 37.67 MiB/s [2024-12-06T06:51:28.700Z] 672.40 IOPS, 42.02 MiB/s 00:25:10.053 Latency(us) 00:25:10.053 [2024-12-06T06:51:28.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.053 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:10.053 Verification LBA range: start 0x0 length 0x200 00:25:10.053 raid5f : 5.26 338.10 21.13 0.00 0.00 9466722.06 181.53 482344.96 00:25:10.053 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:10.053 Verification LBA range: start 0x200 length 0x200 00:25:10.053 raid5f : 5.24 351.24 21.95 0.00 0.00 8996728.39 231.80 472812.45 00:25:10.053 [2024-12-06T06:51:28.700Z] =================================================================================================================== 00:25:10.053 [2024-12-06T06:51:28.700Z] Total : 689.34 43.08 0.00 0.00 9227698.19 181.53 482344.96 00:25:11.428 00:25:11.429 real 0m7.598s 00:25:11.429 user 0m13.956s 00:25:11.429 sys 0m0.341s 00:25:11.429 06:51:29 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.429 06:51:29 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:25:11.429 ************************************ 00:25:11.429 END TEST bdev_verify_big_io 00:25:11.429 ************************************ 00:25:11.429 06:51:29 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:11.429 06:51:29 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:25:11.429 06:51:29 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:11.429 06:51:29 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:11.429 ************************************ 00:25:11.429 START TEST bdev_write_zeroes 00:25:11.429 ************************************ 00:25:11.429 06:51:29 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:11.686 [2024-12-06 06:51:30.089597] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:25:11.686 [2024-12-06 06:51:30.089771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91407 ] 00:25:11.686 [2024-12-06 06:51:30.279635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.943 [2024-12-06 06:51:30.437919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.509 Running I/O for 1 seconds... 
00:25:13.442 18447.00 IOPS, 72.06 MiB/s 00:25:13.442 Latency(us) 00:25:13.442 [2024-12-06T06:51:32.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.442 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:13.442 raid5f : 1.01 18431.11 72.00 0.00 0.00 6917.00 2025.66 14477.50 00:25:13.442 [2024-12-06T06:51:32.089Z] =================================================================================================================== 00:25:13.442 [2024-12-06T06:51:32.089Z] Total : 18431.11 72.00 0.00 0.00 6917.00 2025.66 14477.50 00:25:14.816 ************************************ 00:25:14.816 END TEST bdev_write_zeroes 00:25:14.816 ************************************ 00:25:14.816 00:25:14.816 real 0m3.390s 00:25:14.816 user 0m2.924s 00:25:14.816 sys 0m0.327s 00:25:14.816 06:51:33 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:14.816 06:51:33 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:25:14.816 06:51:33 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:14.816 06:51:33 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:25:14.816 06:51:33 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:14.816 06:51:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:14.816 ************************************ 00:25:14.816 START TEST bdev_json_nonenclosed 00:25:14.816 ************************************ 00:25:14.816 06:51:33 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:15.074 [2024-12-06 
06:51:33.517490] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:25:15.074 [2024-12-06 06:51:33.518077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91460 ] 00:25:15.074 [2024-12-06 06:51:33.719885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.333 [2024-12-06 06:51:33.873697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.333 [2024-12-06 06:51:33.874061] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:25:15.333 [2024-12-06 06:51:33.874120] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:15.333 [2024-12-06 06:51:33.874153] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:15.622 ************************************ 00:25:15.622 END TEST bdev_json_nonenclosed 00:25:15.622 ************************************ 00:25:15.622 00:25:15.622 real 0m0.758s 00:25:15.622 user 0m0.489s 00:25:15.622 sys 0m0.162s 00:25:15.622 06:51:34 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:15.622 06:51:34 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:25:15.622 06:51:34 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:15.622 06:51:34 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:25:15.622 06:51:34 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:15.622 06:51:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:15.622 
************************************ 00:25:15.622 START TEST bdev_json_nonarray 00:25:15.622 ************************************ 00:25:15.622 06:51:34 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:15.880 [2024-12-06 06:51:34.331284] Starting SPDK v25.01-pre git sha1 20bebc997 / DPDK 24.03.0 initialization... 00:25:15.880 [2024-12-06 06:51:34.332007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91486 ] 00:25:15.880 [2024-12-06 06:51:34.514423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.138 [2024-12-06 06:51:34.686063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.138 [2024-12-06 06:51:34.686228] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:25:16.138 [2024-12-06 06:51:34.686267] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:16.138 [2024-12-06 06:51:34.686304] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:16.396 ************************************ 00:25:16.396 END TEST bdev_json_nonarray 00:25:16.396 ************************************ 00:25:16.396 00:25:16.396 real 0m0.776s 00:25:16.396 user 0m0.512s 00:25:16.396 sys 0m0.158s 00:25:16.396 06:51:34 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:16.396 06:51:34 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:25:16.396 06:51:35 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:25:16.396 06:51:35 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:25:16.396 06:51:35 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:25:16.396 06:51:35 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:25:16.396 06:51:35 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:25:16.396 06:51:35 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:25:16.396 06:51:35 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:16.396 06:51:35 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:25:16.396 06:51:35 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:25:16.396 06:51:35 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:25:16.396 06:51:35 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:25:16.653 ************************************ 00:25:16.653 END TEST blockdev_raid5f 00:25:16.653 ************************************ 00:25:16.653 00:25:16.653 real 0m49.566s 00:25:16.653 user 1m7.811s 00:25:16.653 sys 0m5.354s 00:25:16.653 06:51:35 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:25:16.653 06:51:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:16.653 06:51:35 -- spdk/autotest.sh@194 -- # uname -s 00:25:16.653 06:51:35 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:25:16.653 06:51:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:25:16.653 06:51:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:25:16.653 06:51:35 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:25:16.653 06:51:35 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:25:16.653 06:51:35 -- spdk/autotest.sh@260 -- # timing_exit lib 00:25:16.653 06:51:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:16.653 06:51:35 -- common/autotest_common.sh@10 -- # set +x 00:25:16.653 06:51:35 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:25:16.653 06:51:35 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:25:16.653 06:51:35 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:25:16.653 06:51:35 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:16.653 06:51:35 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:16.653 06:51:35 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:25:16.653 06:51:35 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:25:16.653 06:51:35 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:16.653 06:51:35 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:16.653 06:51:35 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:16.653 06:51:35 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:16.653 06:51:35 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:25:16.653 06:51:35 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:16.653 06:51:35 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:25:16.653 06:51:35 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:16.653 06:51:35 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:16.653 06:51:35 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:25:16.653 06:51:35 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:25:16.653 06:51:35 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:25:16.654 06:51:35 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:25:16.654 06:51:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:16.654 06:51:35 -- common/autotest_common.sh@10 -- # set +x 00:25:16.654 06:51:35 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:25:16.654 06:51:35 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:25:16.654 06:51:35 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:25:16.654 06:51:35 -- common/autotest_common.sh@10 -- # set +x 00:25:18.553 INFO: APP EXITING 00:25:18.553 INFO: killing all VMs 00:25:18.553 INFO: killing vhost app 00:25:18.553 INFO: EXIT DONE 00:25:18.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:18.812 Waiting for block devices as requested 00:25:18.812 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:18.812 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:19.748 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:19.748 Cleaning 00:25:19.748 Removing: /var/run/dpdk/spdk0/config 00:25:19.748 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:19.748 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:19.748 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:19.748 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:19.748 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:19.748 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:19.748 Removing: /dev/shm/spdk_tgt_trace.pid56975 00:25:19.748 Removing: /var/run/dpdk/spdk0 00:25:19.748 Removing: /var/run/dpdk/spdk_pid56734 00:25:19.748 Removing: /var/run/dpdk/spdk_pid56975 00:25:19.748 Removing: /var/run/dpdk/spdk_pid57209 00:25:19.748 Removing: /var/run/dpdk/spdk_pid57319 00:25:19.748 Removing: /var/run/dpdk/spdk_pid57375 00:25:19.748 Removing: /var/run/dpdk/spdk_pid57503 00:25:19.748 Removing: 
/var/run/dpdk/spdk_pid57526 00:25:19.748 Removing: /var/run/dpdk/spdk_pid57731 00:25:19.748 Removing: /var/run/dpdk/spdk_pid57848 00:25:19.748 Removing: /var/run/dpdk/spdk_pid57955 00:25:19.748 Removing: /var/run/dpdk/spdk_pid58083 00:25:19.748 Removing: /var/run/dpdk/spdk_pid58191 00:25:19.748 Removing: /var/run/dpdk/spdk_pid58230 00:25:19.748 Removing: /var/run/dpdk/spdk_pid58272 00:25:19.748 Removing: /var/run/dpdk/spdk_pid58343 00:25:19.748 Removing: /var/run/dpdk/spdk_pid58460 00:25:19.748 Removing: /var/run/dpdk/spdk_pid58936 00:25:19.748 Removing: /var/run/dpdk/spdk_pid59017 00:25:19.748 Removing: /var/run/dpdk/spdk_pid59091 00:25:19.748 Removing: /var/run/dpdk/spdk_pid59112 00:25:19.748 Removing: /var/run/dpdk/spdk_pid59260 00:25:19.748 Removing: /var/run/dpdk/spdk_pid59276 00:25:19.748 Removing: /var/run/dpdk/spdk_pid59431 00:25:19.748 Removing: /var/run/dpdk/spdk_pid59452 00:25:19.748 Removing: /var/run/dpdk/spdk_pid59522 00:25:19.748 Removing: /var/run/dpdk/spdk_pid59540 00:25:19.748 Removing: /var/run/dpdk/spdk_pid59606 00:25:19.748 Removing: /var/run/dpdk/spdk_pid59633 00:25:19.748 Removing: /var/run/dpdk/spdk_pid59831 00:25:19.748 Removing: /var/run/dpdk/spdk_pid59872 00:25:19.748 Removing: /var/run/dpdk/spdk_pid59957 00:25:19.748 Removing: /var/run/dpdk/spdk_pid61344 00:25:19.748 Removing: /var/run/dpdk/spdk_pid61550 00:25:19.748 Removing: /var/run/dpdk/spdk_pid61697 00:25:19.748 Removing: /var/run/dpdk/spdk_pid62350 00:25:19.748 Removing: /var/run/dpdk/spdk_pid62567 00:25:19.748 Removing: /var/run/dpdk/spdk_pid62713 00:25:19.748 Removing: /var/run/dpdk/spdk_pid63373 00:25:19.748 Removing: /var/run/dpdk/spdk_pid63708 00:25:19.748 Removing: /var/run/dpdk/spdk_pid63858 00:25:19.748 Removing: /var/run/dpdk/spdk_pid65271 00:25:19.748 Removing: /var/run/dpdk/spdk_pid65529 00:25:19.748 Removing: /var/run/dpdk/spdk_pid65675 00:25:19.748 Removing: /var/run/dpdk/spdk_pid67092 00:25:19.748 Removing: /var/run/dpdk/spdk_pid67345 00:25:19.748 Removing: 
/var/run/dpdk/spdk_pid67496 00:25:19.748 Removing: /var/run/dpdk/spdk_pid68909 00:25:19.748 Removing: /var/run/dpdk/spdk_pid69360 00:25:19.748 Removing: /var/run/dpdk/spdk_pid69506 00:25:19.748 Removing: /var/run/dpdk/spdk_pid71020 00:25:19.748 Removing: /var/run/dpdk/spdk_pid71284 00:25:19.748 Removing: /var/run/dpdk/spdk_pid71430 00:25:19.748 Removing: /var/run/dpdk/spdk_pid72943 00:25:19.748 Removing: /var/run/dpdk/spdk_pid73209 00:25:19.748 Removing: /var/run/dpdk/spdk_pid73355 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74868 00:25:19.749 Removing: /var/run/dpdk/spdk_pid75362 00:25:19.749 Removing: /var/run/dpdk/spdk_pid75508 00:25:19.749 Removing: /var/run/dpdk/spdk_pid75656 00:25:19.749 Removing: /var/run/dpdk/spdk_pid76104 00:25:19.749 Removing: /var/run/dpdk/spdk_pid76873 00:25:19.749 Removing: /var/run/dpdk/spdk_pid77249 00:25:19.749 Removing: /var/run/dpdk/spdk_pid77955 00:25:19.749 Removing: /var/run/dpdk/spdk_pid78441 00:25:19.749 Removing: /var/run/dpdk/spdk_pid79235 00:25:19.749 Removing: /var/run/dpdk/spdk_pid79674 00:25:19.749 Removing: /var/run/dpdk/spdk_pid81665 00:25:19.749 Removing: /var/run/dpdk/spdk_pid82121 00:25:19.749 Removing: /var/run/dpdk/spdk_pid82571 00:25:19.749 Removing: /var/run/dpdk/spdk_pid84704 00:25:19.749 Removing: /var/run/dpdk/spdk_pid85195 00:25:19.749 Removing: /var/run/dpdk/spdk_pid85704 00:25:19.749 Removing: /var/run/dpdk/spdk_pid86774 00:25:19.749 Removing: /var/run/dpdk/spdk_pid87108 00:25:19.749 Removing: /var/run/dpdk/spdk_pid88068 00:25:19.749 Removing: /var/run/dpdk/spdk_pid88398 00:25:19.749 Removing: /var/run/dpdk/spdk_pid89361 00:25:19.749 Removing: /var/run/dpdk/spdk_pid89690 00:25:20.008 Removing: /var/run/dpdk/spdk_pid90372 00:25:20.008 Removing: /var/run/dpdk/spdk_pid90658 00:25:20.008 Removing: /var/run/dpdk/spdk_pid90724 00:25:20.008 Removing: /var/run/dpdk/spdk_pid90768 00:25:20.008 Removing: /var/run/dpdk/spdk_pid91029 00:25:20.008 Removing: /var/run/dpdk/spdk_pid91209 00:25:20.008 Removing: 
/var/run/dpdk/spdk_pid91306 00:25:20.008 Removing: /var/run/dpdk/spdk_pid91407 00:25:20.008 Removing: /var/run/dpdk/spdk_pid91460 00:25:20.008 Removing: /var/run/dpdk/spdk_pid91486 00:25:20.008 Clean 00:25:20.008 06:51:38 -- common/autotest_common.sh@1453 -- # return 0 00:25:20.008 06:51:38 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:25:20.008 06:51:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:20.008 06:51:38 -- common/autotest_common.sh@10 -- # set +x 00:25:20.008 06:51:38 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:25:20.008 06:51:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:20.008 06:51:38 -- common/autotest_common.sh@10 -- # set +x 00:25:20.008 06:51:38 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:20.008 06:51:38 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:20.008 06:51:38 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:20.008 06:51:38 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:25:20.008 06:51:38 -- spdk/autotest.sh@398 -- # hostname 00:25:20.008 06:51:38 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:20.267 geninfo: WARNING: invalid characters removed from testname! 
00:25:46.811 06:52:04 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:50.087 06:52:08 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:53.368 06:52:11 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:55.278 06:52:13 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:58.581 06:52:16 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:01.116 06:52:19 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:03.650 06:52:22 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:03.650 06:52:22 -- spdk/autorun.sh@1 -- $ timing_finish 00:26:03.650 06:52:22 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:26:03.650 06:52:22 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:03.650 06:52:22 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:26:03.650 06:52:22 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:03.650 + [[ -n 5205 ]] 00:26:03.650 + sudo kill 5205 00:26:03.659 [Pipeline] } 00:26:03.675 [Pipeline] // timeout 00:26:03.680 [Pipeline] } 00:26:03.696 [Pipeline] // stage 00:26:03.701 [Pipeline] } 00:26:03.716 [Pipeline] // catchError 00:26:03.725 [Pipeline] stage 00:26:03.727 [Pipeline] { (Stop VM) 00:26:03.740 [Pipeline] sh 00:26:04.019 + vagrant halt 00:26:08.209 ==> default: Halting domain... 00:26:13.501 [Pipeline] sh 00:26:13.778 + vagrant destroy -f 00:26:17.967 ==> default: Removing domain... 
00:26:17.978 [Pipeline] sh 00:26:18.256 + mv output /var/jenkins/workspace/raid-vg-autotest_3/output 00:26:18.265 [Pipeline] } 00:26:18.280 [Pipeline] // stage 00:26:18.286 [Pipeline] } 00:26:18.301 [Pipeline] // dir 00:26:18.306 [Pipeline] } 00:26:18.322 [Pipeline] // wrap 00:26:18.329 [Pipeline] } 00:26:18.344 [Pipeline] // catchError 00:26:18.354 [Pipeline] stage 00:26:18.357 [Pipeline] { (Epilogue) 00:26:18.369 [Pipeline] sh 00:26:18.687 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:25.278 [Pipeline] catchError 00:26:25.280 [Pipeline] { 00:26:25.288 [Pipeline] sh 00:26:25.565 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:25.565 Artifacts sizes are good 00:26:25.589 [Pipeline] } 00:26:25.605 [Pipeline] // catchError 00:26:25.616 [Pipeline] archiveArtifacts 00:26:25.624 Archiving artifacts 00:26:25.738 [Pipeline] cleanWs 00:26:25.749 [WS-CLEANUP] Deleting project workspace... 00:26:25.749 [WS-CLEANUP] Deferred wipeout is used... 00:26:25.755 [WS-CLEANUP] done 00:26:25.756 [Pipeline] } 00:26:25.772 [Pipeline] // stage 00:26:25.777 [Pipeline] } 00:26:25.791 [Pipeline] // node 00:26:25.796 [Pipeline] End of Pipeline 00:26:25.910 Finished: SUCCESS